Demonstrating Service Desk Value Through More Meaningful Metrics

Written by Daniel Wood, Head of Research, Service Desk Institute

In collaboration with Cherwell Software



Table of Contents

Introduction

Key Findings

Part One – Demonstrating Value

1. Which of the following performance metrics do you currently measure?

2. What one metric is most important to the Service Desk?

3. Who is responsible for producing/measuring Service Desk metrics and reporting on them?

4. How often do you produce Service Desk metrics reports?

5. Do you communicate/publish your metrics targets?

6. To whom do you present your Service Desk metrics?

7. Do customers/end users ask to see metrics reports?

8. Sharing metrics with the business has helped to improve the Service Desk’s relationship with the business

9. Does the business make decisions based on the metrics you produce?

Part Two – Metrics and the Business

1. What metrics/information does your business ask you for?

2. What one metric is most important to the business?

3. What information does your business want to see you produce more of?

4. Which of these cost based metrics do you currently measure?

5. Which, if any, of these 5 business value metrics do you currently measure?

Part Three – Business Value Metrics

Presenting Business Value Metrics

Moving Beyond the Basics – Delivering True Business Value

Part Four – Metrics Best Practice

The SDI 17 best practice metrics

Displaying Metrics Information – SDC best practice guidelines

Why do Metrics Matter?

Conclusion

Part Five – Interviews


Introduction

Welcome to the SITS13 – The Service Desk & IT Support Show white paper, produced in collaboration with SDI and Cherwell Software. The motivation for this white paper started with an understanding that metrics have always been a vital component of our Service Desks. Since their very genesis, Service Desks have sought ways to understand their performance and set targets. Today, the story of metrics is one of reporting on a wide range of performance measures, each designed to demonstrate that the Service Desk is delivering value and quality to its organisation. Metrics also provide management with the quantitative data they need to make accurate and reasoned decisions – there is widespread adherence to the mantra that ‘you can’t manage what you can’t measure’. However, in the IT world of today it is about more than just the ability to manage; it is increasingly about ‘value’, and about providing metrics to substantiate that value to your organisation.

This is a research report of two halves: the first is a state-of-the-nation survey, which offers a view of metrics right here and now. What metrics are important to Service Desks? How are they using metrics in their everyday Service Desk life? Who are they sharing metrics information with, and why? It also looks to the future: what should we be looking at now to ensure we stay relevant and meaningful in the years and decades to come? Which business-related metrics are offering fresh insights and perspectives? What tools are available now to achieve this? This white paper will reveal the answers and aims to prepare you for the future.

The second half provides additional best practice metrics guidance from the Service Desk industry’s leading authority, the Service Desk Institute, accompanied by interviews with the respondents to our online survey.

Executive Summary

This white paper identifies the range and use of metrics today in the Service Desk industry. The results for this survey, which ran in January and February 2013, were obtained from an online survey sent to more than 5,000 ITSM professionals. Additional evidence and opinion was gleaned from personal interviews conducted with Service Desk professionals and consultants by the author – their insights provide valuable context to the quantitative data displayed in this white paper.

This research study reveals that there is widespread acceptance and adoption of metrics, and that quantitative information is playing a vital role in the decision-making process. It also shows that many Service Desks have adopted a variety of industry best practice metrics, but there is a wide range of opinion on which metric is the most important. This can be explained by the fact that every Service Desk is different and seeks a range of information to enable it to make important decisions surrounding resourcing, service delivery and customer satisfaction. The interviews reveal that Service Desk professionals are interested in enhancing and broadening the range of metrics they measure and report on, and are keen to produce information of interest to the business: there is a clear desire to engage and work with the business to drive performance forward.

We also see that metrics are changing the way organisations work by making information available to a wider range of people than ever before. It is becoming clear that the adoption of business value metrics will form a key component of the transition from traditional Service Desks towards business service centres.

Key Findings

The most important metric to both the Service Desk and the business: ‘resolved within SLA’

The business wants to see more information on performance against SLAs

The most common metric businesses ask for is ‘resolved within SLA’

For most Service Desks, the Service Desk Manager is responsible for producing metrics reports

Service Desk reports are most frequently produced on a monthly basis

Metrics reports are usually presented to senior management

Only 28 per cent of customers ask to see metrics reports

53 per cent believe sharing metrics reports has improved the Service Desk’s relationship with the business

50 per cent of businesses make decisions based on metrics

The most common cost based metric is ‘cost of IT operation’

40 per cent of respondents do not currently measure any business value metrics


1. Which of the following performance metrics do you currently measure? (Choose all that apply)

Metric %

Number of incidents and service requests 92

% of incidents resolved within SLA 65

Incident resolution time 63

Average time to respond 54

Backlog/open incidents 53

First contact resolution rate 47

Abandon rate 43

Comparison of SLA goals to actuals 42

Average resolution time by priority 25

Average resolution time by incident category 20

Re-opened incident rate 17

Remote control and self-help usage 14

Hierarchical escalations 13

Functional escalations 10

Cost per incident or service request 10

Total cost of ownership 6

Relative cost per incident per channel 4

None 2

The above 17 measures are taken from SDI’s international best practice standard. The standard, created in collaboration with industry experts from across the globe, prescribes the metrics the industry should measure in order to deliver value to the business and drive performance. A comprehensive guide to these 17 measures is included in part four of this white paper.

The results for this question reveal that 92 per cent of Service Desks currently measure the number of incidents and service requests received. We would expect this metric to be the most ubiquitous: without it, the Service Desk would be unable to resource effectively or ensure it was operating at its correct capacity. The number of incidents and service requests signifies, in very broad terms, the ‘work’ the Service Desk gets through on a daily basis – trended over time, this metric will reveal whether the Service Desk is becoming busier and whether more or less resource is needed to respond to customer interactions in a timely fashion.

Second on the list is the percentage of incidents resolved within SLA. This metric is the measure of the Service Desk’s adherence to the contract that exists between the Service Desk and the business. It is this metric that demonstrates whether the Service Desk is delivering on its contractual agreements, and it is of great interest to both the Service Desk and the business, as both have a vested interest. For many Service Desks, this metric – along with customer satisfaction – is their key performance indicator.

Rounding out the top three is incident resolution time, with 63 per cent of respondents choosing this option. As with incidents resolved within SLA, this metric enables Service Desks to understand how long they are taking to resolve incidents, so they can make effective decisions surrounding areas such as resourcing, performance and service improvement, amongst others.

Interestingly, all the cost-based metrics feature in the bottom four, with only 10 per cent measuring the cost per incident or service request. This low percentage marries with SDI’s own historical research, which highlights that only a small percentage of Service Desks understand their cost per call or email. Many observers find it implausible that so few Service Desks measure these metrics given that spending and costs are constantly under the financial microscope. The explanation for this low figure, as supported by the evidence from our interviewees, is that Service Desks find it very difficult to get a firm handle on their costs, although many would like to have a better understanding. A simple formula for calculating cost-based metrics is included in part four of this white paper.
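As a rough illustration of the general approach – dividing a period’s operating cost by the tickets handled in that period – the sketch below uses entirely hypothetical figures; the paper’s own formula appears in part four:

```python
# Illustrative sketch: approximate cost per incident as the service desk's
# total operating cost for a period divided by the number of incidents and
# service requests handled in that period. All figures are hypothetical.

def cost_per_incident(total_operating_cost: float, incidents_handled: int) -> float:
    """Average cost of handling one incident or service request."""
    if incidents_handled == 0:
        raise ValueError("no incidents handled in this period")
    return total_operating_cost / incidents_handled

# Hypothetical example: a desk costing 45,000 per month handling 3,000 tickets
print(round(cost_per_incident(45_000, 3_000), 2))  # 15.0
```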


Part One – Demonstrating Value

“In today’s competitive economy every Service Desk should know its cost base and seek to make itself more effective and productive.”

HOWARD KENDALL, FOUNDER, SDI


2. What one metric is most important to the Service Desk? (Select one only)

Metric %

Resolved within SLA 24

Customer satisfaction 19

First Time Fix Rate/First Contact Resolution 13

Average Time to Respond 11

Number of incidents 8

Number of open incidents 7

Cost based metrics (cost of operation, cost per call etc.) 6

Average resolution time 5

Availability of services 2

Number of incidents resolved 2

Abandon rate 1

Number of incidents per service 1

Number of incidents escalated to problems 1

As shown, resolved within SLA and customer satisfaction topped the list of the most important metrics for the Service Desk. Comments accompanying these two metrics noted that resolved within SLA was the ultimate assessment of Service Desk performance, as it demonstrates the ability to respond effectively and enables users to get back to work quickly and efficiently.

Customer satisfaction, meanwhile, was considered the combination of all other metrics – if performance was strong across a range of metrics, customer satisfaction would therefore be good as well. As one respondent noted, it is the customer who passes the overall judgement on the service that is provided.

Rounding out the top three is First Time Fix Rate/First Contact Resolution (incidents that are resolved whilst speaking to the user, without having to ask for additional help or assistance). One comment on this metric was that it is the most important measure for the Service Desk because it represents the cheapest, most cost-effective method of support to the business, whilst also ensuring maximum user productivity.



3. Who is responsible for producing/measuring Service Desk metrics and reporting on them? (Select one only)

Job title %

Service Desk Manager 29

Multiple individuals 24

Other, please enter role 10

Service Delivery Manager 10

Service Desk Team Leader 9

IT Manager 9

Not one single person 5

Service Desk Analyst 3

Business Relationship Manager 2

The most popular option was Service Desk Manager, a result we would expect given that it is they who, in many cases, have overall control, visibility and responsibility for the Service Desk operation. However, the results above also show that there are often multiple individuals involved in creating metrics reports. This will mostly be where individuals have different areas of responsibility, or where the reporting function would be too time-consuming for one person on their own. One of Service Desks’ biggest issues with their Service Desk solution is that it is difficult and laborious to extract the information and data required to produce metrics reports.

The results also show that the responsibility for producing metrics reports can fall to people not included in the above categories. For those who selected the ‘other’ option, the job titles include the following:

Customer Support Manager

Knowledge and Reporting Manager

Awareness and Service Improvement Team

Knowledge Manager

Customer Service Manager

Head of IT

Problem and Reporting Analyst

IT Contractors

Service Support Manager

Service Delivery Coordinator

Operations Manager



4. How often do you produce Service Desk metrics reports? (Select one only)

Frequency %

On an ad-hoc basis 10

Daily 11

Weekly 27

Fortnightly 2

Monthly 46

Quarterly 3

Every 6 months 1

SDI’s Service Desk standards recommend that metrics reports are produced on a monthly basis and disseminated as far and wide throughout the business as possible. The results above show that monthly is the most common frequency, although over a quarter of respondents produce metrics reports on a weekly basis.

Of course, the variety and range of information may differ between the monthly and the weekly report, and each may be shared amongst different groups. For example, the weekly report may just be used within the Service Desk to highlight that week’s performance and where it has hit or missed targets. The monthly report may delve into more detail, include a broader range of additional metrics, and may be presented to, and prepared with, a different audience in mind.

As with all metrics, the key is to consider how the reports are used and what decisions can be made on the strength of the available data. Many long-term strategic decisions can only be made with a credible amount of historical data trended over time.

5. Do you communicate/publish your metrics targets?

Every metric the Service Desk records should have a target or goal attached to it. Without a target, it is difficult to accurately gauge performance and trigger alerts if certain metrics are about to breach targets. Targets should be intelligent – reviewed and adjusted in line with Service Desk performance – and aspirational, but not to the point that they damage morale if they are consistently missed. Used correctly, targets can be a great motivator and provide something tangible for the whole team to aim for.
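The target-plus-alert idea above can be sketched in code. The following is a hypothetical illustration: the metric names, values and the five-point warning margin are assumptions for the example, not values prescribed by the SDI standard:

```python
# Hypothetical sketch: each "higher is better" metric carries a target, and a
# simple check flags metrics that have breached it or are close to doing so.

WARNING_MARGIN = 0.05  # warn when within 5 percentage points of the target

# (current value, target) as fractions; names and figures are illustrative
metrics = {
    "resolved_within_sla": (0.91, 0.90),
    "first_contact_resolution": (0.46, 0.50),
    "customer_satisfaction": (0.95, 0.85),
}

def status(current: float, target: float) -> str:
    """Classify a metric against its target."""
    if current < target:
        return "BREACH"
    if current - target < WARNING_MARGIN:
        return "AT RISK"
    return "OK"

for name, (current, target) in metrics.items():
    print(f"{name}: {status(current, target)}")
```

A report like this gives the team early sight of targets about to be missed, rather than discovering the breach in the monthly report.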

The results show that just two-thirds of Service Desks communicate and publish their targets. Targets provide an easy way to assess Service Desk performance and are helpful in making sense of data.

It is troubling that 32 per cent of Service Desks do not communicate metrics targets. The Service Desk needs to be open and visible if it is to become a trusted business partner.


[Chart: Yes 68%, No 32%]


6. To whom do you present your Service Desk metrics? (Choose all that apply)

Audience %

IT Director 46

General senior management 44

Internal customers 30

Just within IT 30

Executive team 28

Just within the Service Desk 27

CIO 21

Internal/external customers 16

External customers 6

It is interesting that just under a third of overall responses (30 per cent) indicated that metrics reports are shared only within IT, and 27 per cent said just within the Service Desk. This raises interesting questions about who the data is produced for and the level of interest audiences have in what is presented. Many Service Desks would say that internal and/or external customers are not interested in seeing Service Desk metrics. This might be true, but Service Desks should consider what information the customer might be interested in and how to present that information.

Part three of this white paper looks to the future of Service Desk metrics and examines the metrics that will provide the business with a clear understanding of Service Desk performance and of how the Service Desk adds value to the business. For metrics to evolve, and for the Service Desk to form tighter business relationships, metrics need to offer information that the business, its employees and its customers find genuinely interesting and useful. Sharing this information will help to drive engagement levels, and it will be increasingly important for all parties to have visibility of relevant and important Service Desk information. This will move the Service Desk forward and break down the barriers that currently confine it.

7. Do customers/end users ask to see metrics reports?

Following on from question six, the majority of customers/end users are not interested in seeing Service Desk reports. The reason for this could be a general lack of engagement with the user population or, as explained above, that the Service Desk does not currently produce information that is useful and/or of interest to its customers.


[Chart: Yes 28%, No 72%]


8. Sharing metrics with the business has helped to improve the Service Desk’s relationship with the business…

In essence, this is the crux of why the Service Desk wants to share information with the business – it is keen to improve its relationship and become a trusted business partner. As shown, 53 per cent of Service Desks believe that sharing information has helped to improve the Service Desk’s relationship with the business. This is an encouraging result and demonstrates the benefits that can be realised through communicating metrics in the correct way. It also shows there is a level of demand and interest for metrics from the business, and that the business is keen to obtain insight into the Service Desk’s performance. It is vital for Service Desks to engage constantly and consistently to get closer to the business and to create opportunities to share feedback and recommendations.

Also of significance is the 35 per cent who said they are unsure. These respondents are not certain that sharing information is having any discernible benefits. For Service Desks in this position, it is worth considering how information is shared and whether there is any support or guidance available to help the business make sense of it – is the business asking for certain information, or is the Service Desk simply providing information it thinks the business will find useful? This is an important question to answer, as real improvements in the business/Service Desk relationship will only come when there are clear communication channels and an inherent understanding of the demands of both parties.

9. Does the business make decisions based on the metrics you produce?

A clear divide, and an important result, as it demonstrates that only half of businesses are using Service Desk metrics as a basis for decision making. It is important to note, however, that the information provided by the Service Desk is helping to improve the decision-making process. One of the most important business decisions based on metrics is resourcing: many Service Desks rely on metrics to help make the business case for more staff or further investment.


[Chart: Yes 50%, No 50%]


Part one of this white paper focused on the Service Desk’s ability to demonstrate its value and share this information with the business – part two moves on to the next step, which is to examine metrics from a business perspective. The business may be interested in the same metrics the Service Desk currently measures, or it may want measurements that help it increase its understanding and make key decisions.

1. What metrics/information does your business ask you for? (Please check all that apply)

These results show there is a strong business demand for metrics, with only 14 per cent stating that the business does not ask for any metrics. The chart above identifies that the business is primarily interested in the Service Desk’s performance, with 80 per cent of respondents selecting this option. Second on the list is customer satisfaction, a key performance measure for the Service Desk as it is a true litmus test of the service provided to customers. These results show that the business is interested in understanding what customers think of the service delivered by the Service Desk. Often, KPIs will provide a useful accompaniment to customer satisfaction – if customer satisfaction is low one month, this may correlate with a corresponding increase in call volumes and/or a lower first contact resolution rate.


Part Two – Metrics and the Business

“This will become more and more important as technology is in almost all products and services. Service Desk data shows what outcomes are being implemented and can lead to service improvements.”

HOWARD KENDALL, FOUNDER, SDI


2. What one metric is most important to the business? (Select one only)

Metric %

Resolved within SLA 25

Customer satisfaction 19

Cost based metrics (cost of support operation, cost per call etc.) 18

Availability of services 11

Average resolution time 8

Average time to respond 7

Number of incidents/service requests 5

First Time Fix Rate 3

Lost service hours 2

Abandon rate 1

Backlog data 1

This question produced some interesting results. The top two choices are the same ones the Service Desk chose as the most important metrics to them; thus, there is unity and a common understanding of the important measures. However, 18 per cent of respondents stated that cost-based metrics were the most important measure to the business, whereas for the Service Desk this was only the seventh most popular option. Clearly, the business has different expectations and looks for different measures to understand the performance of the Service Desk. That being said, it is also true that the business, like the Service Desk, looks at adherence to SLAs and the service delivered to customers as the key measures of Service Desk performance.

3. What information does your business want to see you produce more of? (Please choose the most important option only)

Clearly there is a demand from the business for performance-based metrics, and it is keen to understand whether the Service Desk is meeting its agreed SLAs. Following closely behind is customer satisfaction – the business wants to understand whether it has a satisfied user population and whether there is a good level of service delivered by the Service Desk.

Some comments include:

“Actually, the business is not particularly interested – it is us as a department that are interested in our own performance and define what we believe is good for the business.”

“The business is unsure what they want and what their requirements are.”

Other responses included: marketing and sales; balance of workload; user tips; ‘can’t get them to tell me’; breakdown of incidents.



4. Which of these cost based metrics do you currently measure? (Please check all that apply)

It is encouraging that 91 per cent of respondents measure cost-based metrics in some form. The ‘cost of the IT operation’ is the most popular cost-based metric, as this is where the Service Desk budget is derived from and it includes every cost associated with delivering service. For those that do not measure this cost, we can assume that responsibility for this function sits outside of IT, and it could be the case that the Service Desk does not have visibility of its operating budget.

Correlating closely with results in part one of this white paper, only a small percentage of respondents measure cost per email, but many more measure cost per call. One result of this difference is that it becomes very difficult to assess the most cost-effective channel for support. For example, by not measuring cost per channel, the Service Desk would have difficulty justifying investment in new technology such as live chat or social media, as it would not have a clear understanding of whether this offered a cheaper way of providing support. This is especially true for Service Desks that look to offer self-service and self-help – until you know the cost for each of your contact channels, it is very difficult to create a comprehensive business plan.

5. Which, if any, of these 5 business value metrics do you currently measure? (Please check all that apply)

The metrics included above move beyond the realm of ‘traditional’ performance metrics – they look at the real value the Service Desk delivers to the business, and the cost when IT fails. 37 per cent measure the business impact of IT failure; along with lost IT service hours and lost business hours, these metrics provide a true depiction of the value of the Service Desk operation. It is encouraging that 60 per cent of Service Desks are measuring business value metrics, as in the long term this will help to bridge the gap between the Service Desk and the business as they reach a mutual understanding of the metrics that are important and useful to both parties. More discussion of business value metrics is included in part three of this white paper.



Traditional metrics have focused on telling the business how good the Service Desk is by detailing the number of calls answered, first time fix rate, incidents resolved within SLA, and a diverse array of other metrics. Business value metrics should comprise any measures that will be beneficial to the business and provide a clear idea of performance and value.

Introducing business metrics can be a huge leap of faith, as they move beyond measures of how well the IT department is doing. Business value metrics place IT’s business performance front and centre. Statistics such as how many business hours have been lost due to IT faults can be disconcerting, and are perhaps something that most Service Desks are not comfortable sharing. However, it is expected that more and more businesses will want access and visibility to these types of metric, as they provide a crucial way to ascertain the value of the Service Desk and its place within the organisation.

Business value metrics also provide real benefit for the Service Desk, as they offer tangible data to support and augment business decisions. Justifications for extra budget or resource will be much more robust if supported by business value metrics. The idea is not to use the data to hide or disguise, but to use these measures as a platform to introduce future improvements and advancements. It is clear there is a need to move beyond ‘us and them’: IT acts as a partner to, and enabler for, the business. Communicating value and metrics in a way the business understands is a critical step in improving this relationship. Understand what the business needs in terms of information; ask what information would be useful to it. Establishing answers to these questions is a crucial step in building a bond between the business and the Service Desk.

What are business value metrics?

The metrics below offer some indication of the types of metric that should be considered when exploring business value metrics.

Lost IT service hours

This metric is important because it provides the business with a clear indication of how long IT services were unavailable to the business. This is an example of a metric that provides real value and insight to the business and gives clear indications of performance. It will also provoke debate and discussion surrounding why hours were lost and what actions can be taken to prevent lost hours in the future.

Lost business hours

This provides a fuller picture of the impact of IT failure. Of course, not all businesses are entirely dependent on IT to function and operate. Like lost service hours, this metric provides the business with a clear understanding of the importance IT plays in its organisation. Lost business hours can then be further scrutinised to ascertain exactly how much revenue was lost due to IT failures.
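The final step described above – translating lost business hours into lost revenue – is simple arithmetic. A minimal sketch, assuming a hypothetical agreed revenue-per-hour figure for the affected business unit:

```python
# Hypothetical sketch: revenue impact approximated as lost business hours
# multiplied by an agreed revenue-per-hour figure. Both inputs below are
# made-up figures for illustration only.

def revenue_lost(lost_business_hours: float, revenue_per_hour: float) -> float:
    """Approximate revenue lost to IT failure for one business unit."""
    return lost_business_hours * revenue_per_hour

# Example: 6 lost business hours in a unit generating 2,500 per hour
print(revenue_lost(6, 2_500))  # 15000
```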


Part Three – Business Value Metrics


Business Impact

This goes beyond the traditional metrics of saying how good the service provided actually is. By understanding that different areas of the business have different levels of importance and can be affected to a greater or lesser degree by IT failure, the Service Desk starts to create a mature and business-focused view of the value it provides to the business. The example table below offers one way to calculate the relative importance of each area of the business and, in turn, the impact of lost IT availability.

Example: Note – each IT service is weighted according to its business importance/value.

IT service             Weighting   Lost minutes   Impact rating
Website (external)     20%         300            6000
Server availability    50%         10             500
Email                  15%         200            3000
Intranet               10%         30             300
Telephony              5%          350            1750

As the table shows, the biggest business impact came from the website being unavailable. However, even though the server was only down for 10 minutes across the month, by virtue of it having the highest weighting it was accountable for an impact of 500. Looking at the business impact of the failure of each service allows you to understand that not all IT services are created equal: some have a much more marked and noticeable impact than others.
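The impact ratings in the example table follow one rule: the weighting, taken as a whole number (e.g. 20 for 20%), multiplied by the lost minutes. A minimal sketch reproducing the table:

```python
# Reproduces the example table above: impact rating = weighting (as a
# whole-number percentage) x lost minutes per service.

services = {
    "Website (external)":  (20, 300),
    "Server availability": (50, 10),
    "Email":               (15, 200),
    "Intranet":            (10, 30),
    "Telephony":           (5, 350),
}

def impact_rating(weighting: int, lost_minutes: int) -> int:
    return weighting * lost_minutes

for name, (weighting, lost_minutes) in services.items():
    print(f"{name}: {impact_rating(weighting, lost_minutes)}")
```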

Risk of missing SLA targets

One of the traditional metrics included in SDI’s Service Desk Certification performance measures criteria is the percentage of incidents fixed within SLA. This is a reactive metric, as it looks at past events; while this hindsight can be useful in planning future improvements, being proactive allows visibility before the event. The risk of missing SLA targets allows the business to prepare for the potential of missed targets and plan accordingly. If SLAs are going to be missed because of change, this can be explained to the business. Some of these changes will be unavoidable, but to strengthen the business and Service Desk relationship, it is important to show that IT is making the business aware.

Some targets might be missed because of a lack of resource. In this instance, the metric becomes not just a way of sharing information with the business: it provides a critical opportunity to ask the business for extra resource to try to prevent targets from being missed.
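As a sketch of how such a proactive measure might work in practice, the hypothetical example below flags open incidents whose SLA deadline is approaching; the incident data and the four-hour warning window are illustrative assumptions, not part of the SDC criteria:

```python
# Hypothetical sketch: flag open incidents whose SLA deadline falls within a
# warning window (or has already passed), giving early sight of targets at
# risk of being missed. Incident data and the window are illustrative.

from datetime import datetime, timedelta

WARNING_WINDOW = timedelta(hours=4)

open_incidents = [
    {"id": "INC-101", "sla_deadline": datetime(2013, 2, 1, 11, 0)},
    {"id": "INC-102", "sla_deadline": datetime(2013, 2, 2, 17, 0)},
]

def at_risk(incidents, now):
    """IDs of incidents whose deadline is within the warning window or past."""
    return [i["id"] for i in incidents if i["sla_deadline"] - now <= WARNING_WINDOW]

print(at_risk(open_incidents, now=datetime(2013, 2, 1, 9, 0)))  # ['INC-101']
```

Reported daily, a list like this lets the desk reassign resource, or warn the business, before the SLA is actually breached.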

Presenting Business Value Metrics

Presenting metrics in isolation can be misleading and can distort the realities of the service being delivered. The example graph below shows a different story when different aspects of service are compared, enabling the Service Desk to better review service quality and amend processes and procedures based on statistical data, which will contribute to an organisation’s drive for continual service improvement.


Moving Beyond the Basics – Delivering True Business Value

The very first table of this white paper clearly shows the preoccupation of Service Desks with providing information that says 'look how good we are'! The focus on providing data to demonstrate the business value delivered by IT via the Service Desk is negligible. Obviously it is important to show that IT and the Service Desk are effective and consistent, but ultimately, business management and stakeholders are increasingly interested in the business value provided. If the services provided do not help the business to be more efficient and productive, there is no value in them, and the business will look to alternatives.

Current Service Desk metrics are almost exclusively focused on showing that IT and the Service Desk are run well: that they are efficient and productive. There are increasing requirements to provide metrics that identify information that can contribute to positive business outcomes, as this is what senior business executives care about, not how quickly the phone was answered.

For example, a leading insurance company has started the journey to deliver business value reporting via the Service Desk, whereby the information they provide clearly shows the monetary implications to specific business units due to systems downtime. Based on agreed business criteria, reports show the impact of lost systems availability and the financial implications to the business. With this focus, the company can better analyse how a service is designed and compare how a service is actually operating versus how the business needs it to operate in order for it to meet its business plans.

Each organisation will have individual requirements for business value metrics and reporting, but increasingly, more information is becoming available from which to create a framework. A key issue will be the ease with which it is possible for organisations to collate, extract and present this information. However, base recommendations will be to identify three to five core metrics to measure results relating to services, financials and people across the business, and to deliver information via business value dashboards and reporting that can be easily accessed and consumed by business stakeholders and users.

This initiative is about ensuring that, through the Service Desk, IT is better able to present more balanced information about the value of the IT services provided.


Part Four: Metrics Best Practice

This section of the white paper contains a description of SDI's 17 best practice metrics. These metrics form part of the performance measures criteria for SDI's international Service Desk Certification (SDC) standard. This standard, revised every three years, is created by industry professionals from across the globe. It aims to provide Service Desks with a set of standards that will raise their performance: the performance measures component of the standard highlights the key metrics that should be measured to create a comprehensive analysis of Service Desk activities.

The SDI 17 best practice metrics

1. Number of incidents and service requests
2. Average time to respond
3. Abandon Rate
4. Incident resolution time
5. First Contact Resolution Rate (FCR)
6. Percentage of incidents resolved within Service Level Agreement
7. Re-opened incident rate
8. Backlog Management
9. Hierarchic escalations (management)
10. Functional escalations (re-assignment)
11. Average resolution time by priority
12. Average resolution time by incident category
13. Comparison of SLA goals to actual targets
14. Remote control and self-help monitoring measured against goals
15. Total cost of ownership
16. Relative cost per incident by channel
17. Cost per incident or service request

What do these metrics mean?

Number of incidents and service requests

This measures how many incidents or service requests the Service Desk receives. This can also be broken down by channel, i.e. phone, email, live chat, in-person etc.

Why it’s importantMeasuring the volume of calls enables you to create an effective and robust staffing model; allow you tosee when your busy periods are by highlighting peaks and troughs; ensure you have enough resources;and understand through what channels your calls are coming in from.

Average time to respond

The standard says: "The Service Desk routinely and consistently collects and analyses the average time it takes to acknowledge an incident or service request by channel or method (phone, e-mail, user-logged, live chat, SMS, fax, etc.)"

Why it’s importantKnowing how long it takes to respond is a key indicator of how well your Service Desk is performing.Working with this metric and breaking down time to respond by analyst or channel will enable you tomake improvements and identify training needs.



Abandon Rate

"The Service Desk routinely and consistently collects and analyses data about the percentage of user telephone calls that are terminated prior to establishing contact with an analyst."

Why it’s importantThis is one of the most important metrics because this informs you as to the availability of your ServiceDesk to respond to customers. Understanding the abandon rate will help to inform staffing and resourcemanagement, and will allow you to better plan for peaks and troughs.

Incident resolution time

"The Service Desk routinely and consistently collects data about the average time taken to resolve incidents and service requests and compares it to the goals/objectives detailed in the Service Level Agreement(s) (SLAs)."

This metric looks at how quickly you resolve incidents and compares these resolution figures to the goals inthe SLA.

Why it’s importantUnderstanding the number or percentage of incidents resolved within each priority category offers aclear indication of how your Service Desk is performing against the obligations and agreements you havewith your customers.

First Contact Resolution Rate (FCR)

"The Service Desk routinely and consistently collects and analyses the percentage of incidents and service requests that are resolved to the customer's satisfaction during the initial call or electronic exchange between end-users and the Service Desk, excluding the entitlement procedure."

This metric is fundamentally different from first level (or line) fix rate, which concerns incidents resolved at first level (the Service Desk) without being escalated to a resolver team (2nd and 3rd line).

Why it’s importantKnowing the first time fix rate is important as this will give you an understanding of the competencylevel of your Analysts and the type and difficulty of the incidents they grapple with.

Percentage of incidents resolved within Service Level Agreement

"The Service Desk routinely and consistently collects data about the percentage of incidents and service requests resolved within the timeframes specified in formal service level agreements."

This measure allows you to understand how you are performing against the agreements you have with your customers. Service Levels are often classed by priority (P1, P2, P3 etc.), with P1 being the highest priority with the lowest agreed time to fix.

Why it’s importantMeasuring this metric allows you to ascertain whether the priority levels are correct or if they’reunobtainable. For example, if you are consistently breaching priority levels, it could be the case that theagreed resolution times need to be changed, or that you require more resources to make themachievable.

Re-opened incident rate

"The Service Desk routinely and consistently collects data about the percentage of closed incidents and service requests subsequently re-opened for additional follow-up."

This metric benefits from some analysis of which incidents have been re-opened, to gain a better understanding of why each re-open occurred.

Why it’s importantUnderstanding why incidents have been re-opened is important because it identifies if there is a trainingneed to explain why incidents were not closed in a satisfactory way. Examining re-opened incidents alsohelps to inform the process for closing incidents. If lots of incidents are being re-opened, it suggeststhey are not being closed correctly – if the reverse is true, it suggests the fixes provided are satisfactoryor that incidents are not being re-opened when they should be (it’s being logged as a new incident) orthat customers do not have a large enough window to offer their opinion on whether the fix wasadequate.


Backlog Management

"The Service Desk routinely and consistently collects data about the total number of open incidents or service requests compared to their age."

It’s worth considering assigning someone to monitor the backlog data to see why calls are still outstandingand how they will be resolved. This is called the TRIAGE process.

Why it’s importantUnderstanding what calls are still open and why is an incredibly useful process as it allows you to identifyif calls are being closed correctly; if calls are being escalated correctly; what action needs to be taken toresolve the open incidents; and why these incidents have not been resolved thus far. Backlog data canalso identify if there is a lack of resource on the Service Desk.

Hierarchic escalations (management)

"The Service Desk routinely and consistently collects data about the percentage of incidents or service requests escalated to management in order to avoid a developing SLA breach."

Why it’s importantIt’s important to measure how many incidents are escalated to management as this will help to identify ifthere are any training issues. It will also allow you to see how much resource is being taken bymanagement fixing incidents and handling customer complaints and feedback.

Functional escalations (re-assignment)

"The Service Desk routinely and consistently collects data about the percentage of incidents, and service or change requests, transferred to a technical team with a higher level of expertise in order to avoid an SLA breach developing."

Functional escalations are distinctly different from hierarchic escalations in that this type of escalation is to another team, not to management. Functional escalations will be incidents that are passed to resolver teams (2nd and 3rd line).

Why it’s importantMuch like hierarchic escalation, functional escalation enables you to understand the number of incidentspassed to the resolver teams, and trends during a period of time. It will enable you to see if trainingcourses have had an effect on the number of escalations to resolver teams, and can be useful inidentifying future training needs. By manually looking through some of the incidents escalated, you willbegin to understand what incidents most commonly require external assistance and whether training forthe 1st line team will be beneficial.

Average resolution time by priority

"The Service Desk routinely and consistently collects data about the average length of time taken to resolve incidents analysed by their priority."

Why it’s importantThis metric enables you to see if the priority categorisations are correct and if you are meeting yourtargets on a regular basis. It is important to look carefully at the exceptions to understand why they havebreach and what can to done in the future to prevent them from breaching again.

Average resolution time by incident category

"The Service Desk routinely and consistently collects data about the average time required to process/resolve a user incident or service request based on incident or service request type."

This metric looks specifically at incidents that are resolved within the set delineations the Service Desk has decided. These might typically include categories for password resets, e-mail problems, hardware errors, etc. This metric is distinctly different from resolution by priority type.

Why it’s importantMeasuring the resolution time by incident category allows you to identify the most common incidentsand how quickly they are resolved. It’s important to look at the exceptions to see what incidents haveexceeded their goal or target time for resolution. Recording incidents by category also allows you tobuild a list of the most common incidents your Service Desk attends to. You’ll also be able to see whattype of incidents take the most time to resolve and which ones are quick fixes.


Comparison of SLA goals to actual targets

"The Service Desk routinely and consistently collects data about its service level commitments and compares it to its actual performance results."

Why it’s importantMany Service Desks live or die by their performance to SLA targets, and if the wider business isinterested in metrics, this is the one it tends to hone in on. From a management perspective, comparingperformance against SLA can be invaluable in helping to identify areas for improvement andunderstanding strengths and weaknesses.

Remote control and self-help monitoring measured against goals

"The Service Desk routinely and consistently collects data about the frequency that remote control tools are used and the number of times that self-help tools assist in incident and service request resolution compared against goals."

This can be a difficult metric to record, but there are two ways to tackle the problem. For remote control, do some development work to include a 'flag' that analysts can tick if they have used remote support. For self-help, you could have a tick box for users to indicate whether the article/guide/FAQ was useful or not. Developing this further, a rating system could be adopted, allowing users to comment on the quality and accuracy of the information.

Why it’s importantMeasuring remote control usage is vital as it provides a real insight into the abilities of your analysts.Also, who is using remote support? What incidents is it most successful at fixing? What customers canremote support – are there some that refuse to allow analysts to connect to their machine in this way?

Measuring who uses remote support identifies any nascent training needs – if it’s not being used, whynot? Do analysts know how to use it? These are revealing findings and will help to train and educateyour Service Desk.

For self-help, it’s important to understand the effectiveness and quality of the information you havemade available as this will help to refine and shape future articles and information. Also, you want to beable to identify if this information is being used and whether more marketing needs to be done to helppromote the availability of self-help to the user population.

Total cost of ownership

"The Service Desk routinely and consistently collects data about the total support cost of each contact and/or customer."

Total cost of ownership refers to the total cost of running the Service Desk.

Why it’s importantQuite simply, it’s vital to understand how much the Service Desk costs to run. Only throughunderstanding these figures can you discover whether there is the money available for increasingresources or increasing spending in other areas. Measuring, tracking and trending the cost of ownershipwill enable you to ascertain if your Service Desk has made any cost or efficiency savings.

Relative cost per incident by channel

"The Service Desk routinely and consistently collects data about the relative cost of Service Desk operations by channel, i.e. telephone, email, live chat, SMS, fax, walk-ins etc."

A simple method for calculating costs:

Average Cost per call/per e-mail

This, along with cost per e-mail, is the essential metric to grapple with if you want to determine the value of your Service Desk, yet, as revealed in part one of this white paper, only 10 per cent of Service Desks measure this metric.

Things to consider:

To give an accurate and fair measurement, the cost of second and third line support should be included.

Determine which measures should be incorporated to give the final figure. For example, some intangible measures should be given a weighting and added to the final total, such as call waiting time or informal peer support.

You will also need to know your staff costs to get an accurate handle on call costs.


A method for calculating cost per call/per e-mail

Some companies use the actual Service Desk budget to calculate cost per call. In essence, they include every cost involved in running the Service Desk and divide this by the number of calls received. This method is a little too simplistic for what we're really looking for in the cost per call metric, but it does highlight why a comparison of metrics is so difficult.

Others will include every cost involved in taking the call. They will include postage costs if hardware needs to be replaced. They might include the cost of using technicians or field agents. Some will include the cost associated with the loss of productivity created by the user being on the telephone. This is why there is such high variation in the reporting of this metric. This is a much more involved way of measuring the metric, but it may also be more informative. If a value can be placed on productivity loss, it will be clear how vital the Service Desk is to the operation of the business. If you can report that your desk saved x amount of productivity, this will place your desk in a very strong position.

The Formula

There are lots of different ways this metric can be measured, but here is one of the best, all-encompassing ways:

The all-encompassing way

Explanation

Ultimately, you want to understand your Service Desk staff cost, broken down into as small a unit as possible. Your HR department can tell you all the components you will need to measure this: salary, benefits, heating, lighting, equipment and any other measures that you think should be included. From this data, you can then work out how much an Analyst costs to employ per minute.

Add to this figure the lifetime cost of software, including support and maintenance. You can split the costs over three years to give you some idea of what it actually costs to run the systems.

You might want to add hardware costs and the cost of using second and third line (although of course, you could have Analyst cost per call, second line cost per call etc.).

Adding up the above will give you the cost per call/e-mail per minute, which then needs to be multiplied by the time duration of the call/e-mail.

All costs associated with providing support (including heating, lighting, rent, salaries, hardware, software etc.) ÷ number of Analysts = Cost per Analyst per minute

Then...

Cost per Analyst per minute × Time taken to resolve and close an incident = Cost per call or email
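The calculation described above can be sketched as follows. All the figures are invented for illustration, including the assumed working pattern used to turn annual costs into a per-minute rate; the structure follows the white paper's approach of deriving a cost per Analyst per minute and multiplying by the minutes taken to resolve and close an incident.

```python
# Sketch of the all-encompassing cost-per-call calculation. Figures invented.

annual_support_costs = 300_000   # salaries, heating, lighting, rent, hardware, software...
number_of_analysts = 5
# Assumed working pattern: 220 days x 7.5 hours x 60 minutes per analyst.
minutes_worked_per_analyst_per_year = 220 * 7.5 * 60

cost_per_analyst_per_minute = (
    annual_support_costs / number_of_analysts / minutes_worked_per_analyst_per_year
)

def cost_per_call(minutes_to_resolve_and_close):
    """Cost per Analyst per minute x minutes to resolve and close."""
    return cost_per_analyst_per_minute * minutes_to_resolve_and_close
```

The same structure applies to the simple method described later: only the cost inputs change, which is why both give a comparable 'stake in the ground' for trending over time.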


"Service Desks must know what we can add to the business' potential to enhance and expand further."

HOWARD KENDALL, FOUNDER, SDI


The simple way

Explanation

The formula is essentially the same, except the number of included costs is significantly reduced. While this might give you a number that is less accurate, it does not mean that it will be less useful. Ultimately, whatever measure you choose needs to provide you with an indication of whether your costs are going up or down over time. Both of the suggested formulas will provide you with a 'stake in the ground' and a useful benchmark from which to analyse your support costs over a period of time.

Comment from Tony Ranson, Independent Consultant

Why do service desks find it difficult to measure cost-based metrics?

"I'm not sure Service Desks find it difficult to measure costs, but I think so few do because they have not yet reached that level of maturity: it's simply the case that they have not thought about costs yet. For many Service Desks, the thought of measuring costs is like a new science, and the approach to measuring costs can be overwhelming. What I would suggest is that Service Desks approach measuring costs by putting a stake in the ground, which is fundamental to see if you are getting better or worse. There is time to finesse and improve over time through continual service improvement."

Cost per incident or service request

"The Service Desk routinely and consistently collects data about the cost per incident and service request of the Service Desk's operations (including people, support infrastructures, and overheads)."

For this metric, you can use the same formulas as for relative cost per incident by channel, except in this instance you're looking at the total cost of incidents and service requests.

Displaying Metrics Information – SDC best practice guidelines

The graph below is a best practice method for displaying metrics data. The graph should contain data for a 12 month period; have a goal or target line; show the trend of the data; and be presented in a clear, concise and consistent standard format. Metrics should be presented in a format that is acceptable and clear to the business and adheres to corporate guidelines.

The simple way formula:

Total salaries of Service Desk staff ÷ number of Analysts ÷ number of minutes worked = Cost per Analyst per minute

Then...

Cost per Analyst per minute × Time taken to resolve and close an incident = Cost per call or email


Why do Metrics Matter?

Service Desks need to understand the value they offer to their users and the business. As the survey results show, adherence to SLA targets and customer satisfaction are key metrics for Service Desks, and these metrics are true determinants of business value. The other component of value is the cost aspect. The survey reveals that a large percentage of Service Desks do measure some aspects of their costs and rely on these measures to truly understand the value for money they offer to the business.

Why is it important to understand the value of IT to the business? If you don't know your value, it's much harder to justify any staff or budget increases for your Service Desk. If Service Desks are unable to measure and communicate their value, the stigma of IT being a 'cash drain' will remain in vogue. Furthermore, with companies tightening their belts, reducing spending on IT might be near the top of the cull list. If companies don't know the value of IT, it is more likely that Service Desk budgets will be cut in preference to areas of the business that do provide tangible evidence of their value.

Why do Service Desks find it difficult to establish their business value? Firstly, they find it hard to establish what metrics they should be measuring, how to measure them, and what they should do with the results. Secondly, the majority of Service Desks are concerned with reporting how they spend money, not with determining the value that these expenditures actually provide. The good news is that the fixes to these problems are relatively straightforward: with a few simple metrics measurements, the business will have a much greater understanding and appreciation of the Service Desk's value.

Furthermore, the Service Desk, as its name implies, is primarily concerned with delivering a service to its users and customers. One of the ways to achieve this is to manage expectations by setting contracts in place. In doing so, both parties know what they are expected to deliver and when a resolution can be expected. Measuring this will demonstrate if these deadlines are consistently being met, and therefore if a good service is being delivered. If targets are consistently breached, this indicates there are some key problems that need to be addressed as a matter of urgency; if you are not measuring this data, you will not know if this is the case.

The ultimate answer to the question of 'Why do metrics matter?' is that if you don't know how much your services cost or whether you deliver a good service that meets expectations, you can expect some tricky times ahead. It has never been more important to demonstrate your value from a financial and customer perspective.


Conclusion

It is clear from the survey results and the discussion of business value metrics that metrics are a big Service Desk topic. Much debate exists around the most important Service Desk measures, and of course these vary depending on a Service Desk's structure, goals and the types of organisation and users supported. The commonality is found in the drive to demonstrate value: this explains why adherence to SLAs and customer satisfaction featured so prominently on the list of metrics important to the Service Desk and the business.

Service Desks today look to demonstrate or prove that they provide value for money and that they can justify further investment and expansion. Metrics play a crucial role in delivering this message, as they offer tangible, empirical evidence that the Service Desk is delivering a quality service at a time when so much of IT is difficult to define and accurately cost.

It is also clear that metrics are evolving and maturing away from demonstrating performance towards demonstrating core business value. Understanding the role that IT plays in delivering value to the business, in terms of supporting users, ensuring availability and mitigating risk against IT failure, is a key consideration, and we can expect businesses to look more and more towards tangible demonstration of these values. It is heartening to find that so many Service Desks measure some metrics and that many of these are considered industry best practice measurements: it is these metrics that will provide the strong foundations for future business value measurements.


Part Five: Interviews

Simon Middleyard, Joint Head of Service and Infrastructure, Government Organisation

Simon is responsible for all aspects of IT service and desk side support and has been in post for 18 months. He has a strong service management background, having previously been a Service Desk Manager. His Service Desk has experienced significant change in the 18 months that he has been in the role. In this time, they have also implemented SLAs, event surveys and other KPIs. The Service Desk supports 1000 users.

The key metrics

18 months ago there were no metrics in place, and customers and the organisation had no expectations around the level and quality of support they should receive from the Service Desk. This changed when the Service Desk expanded from one to three Analysts. The Financial Director wanted metrics that would demonstrate that the heavy investment in recruitment was worthwhile: essentially, he was interested in understanding whether the Service Desk was offering value for money. As a result, Simon and his team implemented SLAs with four priority levels and an event based survey sent to users after every call is closed. Performance against SLA and customer satisfaction provide the Director with a general flavour of the service provided.

Sharing metrics

Simon's team shares metrics on a weekly basis at their Wednesday morning meeting. During this meeting, metrics are discussed, and on a monthly basis these metrics are passed up to Director level. The Director is primarily interested in whether the Service Desk is missing SLA targets, as this is their primary measure of Service Desk performance. They also share customer satisfaction results graded on a 1-4 scale.

Cost based metrics

Currently, Simon's team does not measure any cost based metrics. He thinks it would be difficult to create a calculation, as their tool has been developed in-house and does not currently record the information required. He believes that Service Desks find it difficult to measure cost based metrics because, often, they do not know where to start. Simon's team believes it is able to demonstrate value through adherence to SLAs and customer satisfaction, and these metrics help to justify the investment in additional staff.

David Lee, Service Desk Team Leader, Northumbria Healthcare

David has worked on the Service Desk for the past six years. He started as a Service Desk Analyst before moving into desktop support. He became Service Desk Team Leader two and a half years ago. His team consists of six first line Analysts that support 9000 users.

The key metrics

Up until 2011, the Service Desk only measured the number of logged tickets and found that this met many of its reporting objectives, as it enabled the desk to understand the volume of work and resources needed. The turning point came when David attended an SDI metrics event in Manchester. Hearing the presenters' and attendees' experiences of metrics proved to be a real eye-opener for David, and he created a suite of seventy-five metrics. The key metrics for David, and the ones that help him understand the Service Desk's performance, are total contacts (number of incidents and service requests); average ticket turnaround time; and resolved within SLA.

Sharing metrics

David creates a monthly metrics report containing a selection of his seventy-five metrics. This report contains graphs and commentary and is distributed amongst the Service Desk team. Beyond this, the Head of Department also receives a monthly report compiled from the data provided by each of the Team Leaders. The author of this report decides which metrics to include, but of greatest interest to the business is the performance against SLAs: indeed, David notes the business is more concerned with adherence to SLA than with how busy they are and how many tickets they log. David noted that it can be difficult to create metrics reports within their ITSM tool, although he has managed to automate many of the metrics included in his report. Including other metrics would require delving into SQL.


Cost based metrics

Currently they do not measure any cost based metrics, although David is interested in doing so. He would be interested in some additional guidance, as the formulas he has looked at have been complicated and unwieldy.

Lauren Conrad, Service Desk Coordinator, Lucidica

Lucidica is a managed service provider based in London. They support small companies, typically with between 1 and 50 employees. Established in 1999, Lucidica has grown its business by offering support with a personal touch and strong customer service.

Lauren Conrad is a Service Desk Coordinator and looks after a team of seven Engineers; she has been in the role for the past two years.

The key metrics

Every morning, Lauren looks at the previous day's open and closed calls and calls not started, which are jobs logged but with no action taken on them. The open and closed calls allow Lauren to see the backlog and which calls she needs to assign to an Engineer. There are two types of SLA, one for contract clients and the other for ad hoc work. Lucidica also measures customer satisfaction by conducting a telephone interview with a random selection of 20 per cent of their closed calls; this allows her to identify if calls have been closed correctly and resolved to the client's satisfaction.

Sharing metrics
Lucidica does not share metrics with clients, as any problem areas are identified through account managers and primary contacts. Clients don't tend to ask for metrics reports, and the business ethos is built around relationships. However, Lucidica does produce an annual report for every client covering current problem areas, current SLAs and call volumes.

Cost based metrics
Lucidica does not measure any specific cost based metrics, although every Engineer records the time spent on each job and an annual net rate bill is produced. This allows Lucidica to understand the profitability of each client and adjust resources as necessary; the report might also prompt changes to SLAs or highlight clients that are having lots of problems.

Gary Adams, Service Desk Manager, NHS Hertfordshire
Gary has worked on NHS Hertfordshire's Service Desk for the past nine years and has been Service Desk Manager for eight of those. The Service Desk supports 8,000 users and is open 365 days a year, 7.30–22.00. It has undergone a remarkable transformation during Gary's time, progressing from a 'log and flog' desk with a First Time Fix (FTF) rate of 12 per cent to a current rate of more than 60 per cent. The desk has also grown from four to fourteen people, and its user population has increased along with its geographical scope.

The key metrics
On a daily basis Gary focuses on FTF rate, First Level Fix rate, SLA performance, Average Speed to Answer, and total points of contact (the total number of interactions). The Service Desk currently averages 12,000 points of contact per month. For Gary, First Level Fix has been a particularly interesting metric, as it enabled the team to discover how much the Service Desk could fix without escalating to second or third line.

The business is driven by what the Service Desk says is important. However, the business does focus on response and resolution within SLA and the availability and uptime of hardware.

Cost based metrics
Gary has looked at cost based metrics but has not yet created any concrete measures. He thinks creating cost based metrics is difficult and is keen to learn from others who already have these measures in place. Going forward, Gary is aware that cost based metrics will become increasingly important, as his Service Desk will be asked to justify its value and demonstrate a clear understanding of its costs.

Fiona Campbell, IT Customer Services Manager, Customer Protection Agency
Fiona's role covers all elements of internal support and some external support. She manages two Desktop Engineers, five Help Desk Analysts and one Team Leader.

The key metrics
This was an interesting question for Fiona. Up until last year, Fiona had a strong focus on metrics, looking to measures such as the number of incoming contacts, calls handled, average response time, and call quality. However, a new IT Director joined the organisation, advocated a different approach, and asked Fiona's team to spend less time on metrics: he believed that as long as the service was working, there was no need to spend time measuring it. As a result, Fiona has moved away from metrics and instead relies on customer feedback to assess how effectively her team is delivering support. Fortunately, customers are very vocal and so provide Fiona's team with an honest and candid appraisal.

Fiona still measures metrics but does not report on them. She has found this difficult, as metrics formed a key component of the appraisal process and one-to-ones; management is now performed using 'gut feel' rather than empirical data. However, Fiona's team has only fifteen per cent of its calls outstanding at the end of the working day, which demonstrates to her that the team is working effectively. She also receives positive feedback from her customers, which, to her, makes the job worthwhile.

Jason Kearney, Service Delivery Manager, Orbit Services
Jason has been Service Delivery Manager at Orbit for the past 18 months. During this time, he has moved the Service Desk function out of its previous position within the Customer Services team and into its own department; the motivation for this was that the business was reporting poor feedback and many calls were going unanswered. The Service Desk has around 14 people, including first and second line, and supports around 1,200 users, a large percentage of whom are mobile workers.

The key metrics
Jason's key metrics are driven by the demands of the business. He focuses on availability, hardware requests (volume and turnaround time) and SLA performance centred around time to restore services, fulfil requests and resolve incidents. Current SLA performance is 90 per cent for incident management and around 95 per cent for request fulfilment. Jason's team also has a metric for training request fulfilment, with internal training provided within 21 days of the request; they are currently hitting 90 per cent for this measure. A good indicator of the Service Desk's improved performance following its relocation from Customer Services is that the First Time Fix rate increased from 1 per cent to 50 per cent.

Sharing metrics
Metrics reports are produced on a monthly basis and shared with the business. There is also a quarterly IT strategy board meeting at which the business is asked what it wants from IT, and from these discussions a rolling two-year plan is created. A good example of this feedback in action: the target average speed to answer was increased from 20 seconds to 40 seconds after the business told the Service Desk it was happy to wait longer if there was a better chance of incidents being resolved.

Cost based metrics
Currently, Orbit's Service Desk does not measure any cost based metrics, but this is definitely something they are looking at for the future.


Richard Haslam, Service Desk Coordinator, Royal College of Physicians
Richard's Service Desk comprises eight people, with two on the first line. They support 450 staff located across thirteen sites and handle 8,000 interactions a year. Richard has been in his current role for the last three years.

The key metrics
Richard focuses on call to logging time (how long it takes for calls to be logged into the system), time to close and performance against SLA. Beyond this, Richard drills deeper into what types of incident are received and takes a proactive approach to minimising future occurrences, for example by identifying training needs or faulty hardware.

Sharing metrics
Metrics are presented in a monthly report to the Head of Operations, who passes it on to the CTO and other senior management. Senior management is interested in whether the Service Desk is meeting its SLA targets, and Richard and his team stay on top of the SLAs by monitoring any breaches and following up on the reasons behind the failures. There is a strong development culture, and Richard and his team are constantly looking at ways to improve.

Cost based metrics
Richard has calculated the cost per call and email by dividing the salaries of the team by the time it takes to complete the work. Using this measure, he can calculate the cost of each piece of work and estimate how much can be saved through education and training programmes. This measurement has given Richard a stake in the ground, so he can see whether his support costs are changing over time. It has also helped to move away from the perception that the Service Desk is a resource that can be infinitely consumed.
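The calculation Richard describes can be sketched as follows: derive a blended hourly rate from the team's combined salaries, then price each call or email by the time spent handling it. All figures below are hypothetical, and the function names are illustrative rather than part of any described tooling.

```python
def hourly_rate(total_annual_salaries, working_hours_per_year):
    """Blended hourly cost of the support team:
    total salary bill divided by total hours worked."""
    return total_annual_salaries / working_hours_per_year

def cost_of_ticket(hours_spent, rate):
    """Cost of a single call or email, based on handling time."""
    return hours_spent * rate

# Hypothetical example: an eight-person team with a combined salary
# bill of 240,000, each working 1,600 hours a year.
rate = hourly_rate(240_000, 8 * 1_600)      # 18.75 per hour
print(round(cost_of_ticket(0.5, rate), 2))  # a 30-minute call costs 9.38
```

Tracking this figure over time gives the "stake in the ground" Richard mentions: if the cost per ticket falls after a training programme, the saving can be quantified rather than asserted.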


About Cherwell Software

Cherwell Software is one of the fastest growing IT service management software providers. It has corporate headquarters in Colorado Springs, Colorado, U.S.A.; EMEA headquarters in Wootton Bassett, U.K.; and a global network of expert partners. Cherwell Software is passionate about customer care and is dedicated to creating "innovative technology built upon yesterday's values."

Its award-winning flagship product is Cherwell Service Management™, a fully integrated service management software solution for IT and technical support professionals, with out-of-the-box PinkVERIFY-accredited ITIL processes and wizard-driven customisation that allows customers to tailor the tool to match their processes without writing any code. Cherwell Service Management offers unmatched flexibility in hosting and concurrent licensing for a low total cost of ownership.

www.cherwell.com

About The Service Desk Institute (SDI)

Founded in 1988 by Howard Kendall, the Service Desk Institute (SDI) is the leading authority on Service Desk and IT support related issues, providing specialist information and research about the technologies, tools and trends of the industry. It is Europe's only support network for IT Service Desk professionals, and its 800 organisation members span numerous industries.

Acting as an independent adviser, SDI captures and disseminates creative and innovative ideas for tomorrow's Service Desk and support operation. SDI sets the best practice standards for the IT support industry and is the conduit for delivering knowledge and career-enhancing skills to the professional community through membership, training, conferences, events and its publication, SupportWorld magazine. It also offers the opportunity for international recognition of the support centre operation through its globally recognised Service Desk Certification audit programme.

www.servicedeskinstitute.com

About SITS – The Service Desk & IT Support Show

The show comprises an exhibition of the latest products, services and solutions from some of the world's leading suppliers, and a world-class education programme including over 40 seminars, keynotes, briefings and discussions. With over 4,500 ITSM professionals converging, it is a great networking opportunity, and there is no charge to attend.

www.servicedeskshow.com
