Thursday, March 13, 2008

User Experience Metrics

Metrics-driven organizations help staff focus on what's important by making those priorities measurable.

I want to use the power of metrics to drive our organization toward a strategic goal of providing a fabulous user experience. Toward that end, I'm drafting an approach that I'll lay out here, including the implications for both management and for product/development teams.

Step One--Incent management

I've worked with senior management in my group to include user experience (UX) metrics in the annual goals of managers in each product line. They, in turn, will put these metrics into the goals of their product owners. Here's the language for those goals:

Prior to going live, [product/functionality] is rated by users on the System Usability Scale (SUS). Threshold = 70/100. Target = 90/100.

On the live web, the User Experience Indicator for [Audience_segment] will be xx or higher by [date].

To be clear, salaries and bonuses are riding (in part) on these goals. If I'm a manager or product owner, I get more money if I provide a great, usable, experience for my end users.

The remainder of this document introduces these goals in more detail.

System Usability Scale
Language for Objectives

Prior to going live, [product/functionality] is rated by users on the System Usability Scale (SUS). Threshold = 70/100. Target = 90/100.

The System Usability Scale (SUS) is a widely used instrument for measuring usability. The SUS focuses on just one aspect of the user experience: usability. It asks users the degree to which they agree or disagree with each of 10 statements:

  • I think that I would like to use this web site frequently.
  • I found the web site unnecessarily complex.
  • I thought the web site was easy to use.
  • I think that I would need the support of a technical person to be able to use this web site.
  • I found the various functions in this web site were well integrated.
  • I thought there was too much inconsistency in this web site.
  • I would imagine that most people would learn to use this web site very quickly.
  • I found the web site very cumbersome to use.
  • I felt very confident using the web site.
  • I needed to learn a lot about this web site before I could use it effectively.

A reliable score will normally require testing with 5-12 users. Our User Experience Team can help a product team determine the optimal number of users. We also plan to build the SUS into virtually all usability tests, and we plan to make usability tests easily accessible to product teams. We want it to be super-easy for a team to know the current SUS score of its product.
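As a concrete illustration, here is how standard SUS scoring works, assuming the conventional item order in which odd-numbered items are positively worded and even-numbered items are negatively worded. This is a minimal sketch of the published scoring rules, not our team's actual tooling:

```python
def sus_score(responses):
    """Convert ten 1-5 Likert responses into a 0-100 SUS score.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The sum of contributions is multiplied by 2.5 to scale to 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        if i % 2 == 1:          # positively worded item
            total += r - 1
        else:                   # negatively worded item
            total += 5 - r
    return total * 2.5

def mean_sus(all_responses):
    """A product's score is usually reported as the mean across participants."""
    return sum(sus_score(r) for r in all_responses) / len(all_responses)
```

Note that an individual SUS score is not a percentage, even though it ranges from 0 to 100; it only becomes meaningful when averaged across participants and compared against a threshold like our 70/90 targets.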

Appropriate Use
Use SUS in 2 places:

  1. on an increment of functionality, prior to going live.
  2. on an entire web experience that is already live, as part of the User Experience Indicator for a given audience segment (see “User Experience Indicator” below)

SUS should be administered under controlled circumstances, typically as part of a usability test.

The User Experience Team also recommends using SUS as an exit criterion for a sprint or release; essentially, an SUS score of 90 is part of the definition of "done."

User Experience Indicator
Since the SUS measures only usability, we need at least one more metric to measure overall user experience. We have not finished defining a standard metric for this, and we do not have sufficient baseline data, so our 2008 objectives will be to create baselines that will enable us to set hard targets in 2009 and beyond.

Language for Objectives
2008: For [audience_segment], establish a baseline measurement of the User Experience Indicator.
2009 and beyond: The User Experience Indicator for [Audience_segment] will be xx or higher by [date].

The Metric Being Created
The User Experience Indicator will most likely be a combination of 3-5 questions addressing the following dimensions:

  • Would you recommend this web site to a friend?
  • How satisfied were you with this site?
  • Compared with other web sites, how well did this site meet your expectations?
  • System Usability Scale

A key piece of our organization's strategy is to provide an unexpectedly enjoyable experience for our users—we want them to say, “wow!” Our assertion is that by combining answers to these questions, we will be able to elicit a reliable measure of the overall user experience, including the “wow factor.” If we have hit the mark with the wow factor, responses to all four of these dimensions will be very favorable.

Appropriate Use
These metrics are best associated with the overall experience of a segment of users (e.g., members, brokers, etc.), with a site already live, rather than with an individual feature or a site in development.

The User Experience Team will create a simple formula for combining these 4 measurements into a single User Experience Indicator and will make it easy for product teams to produce a User Experience Indicator for their product. We hope to have this methodology ready in Q2 2008.
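To make the idea of a "simple formula" concrete, here is one way the four measurements could be combined. The equal weights and the input scales below are illustrative assumptions, not the team's final methodology:

```python
def ux_indicator(recommend, satisfaction, expectations, sus,
                 weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine four measurements into a single 0-100 indicator.

    recommend     -- Net Promoter Score, on the -100..100 scale
    satisfaction  -- percent reporting top-box satisfaction, 0..100
    expectations  -- percent saying the site exceeded expectations, 0..100
    sus           -- mean SUS score, 0..100
    weights       -- hypothetical equal weighting of the four inputs
    """
    # Rescale NPS from -100..100 onto 0..100 so all inputs share a scale.
    recommend_scaled = (recommend + 100) / 2
    parts = (recommend_scaled, satisfaction, expectations, sus)
    return sum(w * p for w, p in zip(weights, parts))
```

Whatever formula the team settles on, keeping every input on a common 0-100 scale before weighting makes the resulting Indicator easy to explain to product owners.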

Background & Rationale
The User Experience Indicator is a very high-level metric. It measures our success at providing a great user experience, but it is not intended to tell us why a user’s experience was good or bad. We have a whole collection of tools available to dig deeper into the “why” questions. This metric gives us one simple indicator of how an audience’s experience is improving over time and in relation to other audience segments. Here’s the significance of each dimension of the Indicator:

Would you recommend this web site to a friend?
This is based on the work of Frederick F. Reichheld, as originally published in the Harvard Business Review piece entitled "The One Number You Need to Grow." It has since been adopted by a wide range of industries and is gathering steam as a standard indicator of business success. This is the basis of the "Net Promoter" discipline. Here's a brief synopsis of the original article:

Companies spend lots of time and money on complex tools to assess customer satisfaction. But they're measuring the wrong thing. The best predictor of top-line growth can usually be captured in a single survey question: Would you recommend this company to a friend? This finding is based on two years of research in which a variety of survey questions were tested by linking the responses with actual customer behavior--purchasing patterns and referrals--and ultimately with company growth. Surprisingly, the most effective question wasn't about customer satisfaction or even loyalty per se. In most of the industries studied, the percentage of customers enthusiastic enough about a company to refer it to a friend or colleague directly correlated with growth rates among competitors. Willingness to talk up a company or product to friends, family, and colleagues is one of the best indicators of loyalty because of the customer's sacrifice in making the recommendation. When customers act as references, they do more than indicate they've received good economic value from a company; they put their own reputations on the line. The findings point to a new, simpler approach to customer research, one directly linked to a company's results.

We hypothesize that a user with a “wow” experience is more likely to say they would recommend the site to a friend.

How satisfied were you with this site?
While the “would you recommend” question has many proponents, there are also those who argue it does not adequately address overall satisfaction. E.g., maybe I would recommend this site because it’s the only place in the world I can buy a particular product, even though my experience of the web site itself is horrible. Or maybe I love the site, but I wouldn't recommend it to a friend because it's not relevant to any of my friends (or colleagues). Because of these limitations, we add a basic satisfaction question.

A key limitation to this kind of generic satisfaction question is that it doesn't help us understand why they're satisfied or dissatisfied, so I've heard concerns that a satisfaction score isn't useful. However, in this instance, we're using the score as part of a measure of success, rather than as formative research. If my satisfaction scores are not high enough, I am then incented to figure out why my users are dissatisfied and what I can do about it. We have lots of other tools available to help with those tasks.

The User Experience Indicator will be particularly sensitive to the extremes on this scale. Since our goal is to produce an unexpectedly enjoyable experience, we don't want people to be only somewhat satisfied; we want them to be thrilled (wow!). So we will initially aim for a threshold of "satisfied or extremely satisfied," but we will quickly move to a target of changing users from "satisfied" to "extremely satisfied." For example, in a November 2007 survey of our registered members, 86% said that they were either satisfied or very satisfied with the site. That looks very good and would be a very difficult number to improve. But a closer look at the data shows that this 86% is a combination of 49% satisfied and 37% very satisfied. A meaningful User Experience Indicator would incent teams to increase the percentage reporting "very satisfied."
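The arithmetic is simple, but worth making explicit. This sketch, using the November 2007 percentages as illustrative respondent counts, shows how a combined satisfaction number can hide the "top box" split a team should actually be incented on:

```python
def satisfaction_summary(counts):
    """Summarize a satisfaction distribution.

    counts: dict mapping a response label to a number of respondents.
    Returns both the combined satisfied-or-better percentage and the
    top-box ("very satisfied") percentage, which is the number to move.
    """
    total = sum(counts.values())
    combined = counts.get("satisfied", 0) + counts.get("very satisfied", 0)
    top_box = counts.get("very satisfied", 0)
    return {
        "satisfied_or_better_pct": round(100 * combined / total, 1),
        "very_satisfied_pct": round(100 * top_box / total, 1),
    }

# Illustrative distribution matching the survey percentages in the post.
survey = {"very satisfied": 37, "satisfied": 49, "other": 14}
summary = satisfaction_summary(survey)
```

A team measured only on the combined 86% has almost no headroom; a team measured on the 37% top box has plenty, and improving it is exactly the "satisfied to thrilled" shift we want.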

Compared with other web sites, how well did this site meet your expectations?
This dimension adds two elements not covered by the previous two:

  • Comparison to other web sites (across industries). This is critical to the context in which users access our web sites. It's important to look across multiple industries, because users' perceptions are ultimately shaped by their experiences with their favorite sites (shopping, banking, blogging, etc.), rather than only with our direct competitors.
  • Experience in relation to expectations. Our strategy is to provide an “unexpectedly enjoyable” experience, so we want to find out how well we did relative to what they expected. As users’ expectations increase, we will need to continue innovating to stay ahead of their expectations. This dimension helps us understand the “unexpected” part of the “wow” factor.

System Usability Scale (SUS)
See above for an introduction to this metric. When included in the User Experience Indicator, the SUS score applies to an entire web presence for a given audience, rather than to an increment of functionality. It is measured in a production environment.

The User Experience Team intends to provide a framework that makes it easy to regularly measure the SUS for each major audience segment.

Next Steps

To use metrics effectively, an organization needs to do 4 things:

  1. Define metrics that measure what's important
  2. Make it easy to measure these metrics
  3. Make these metrics widely visible
  4. Formally incent staff to meet targets for these metrics

This post is an initial crack at #1 and #4. The User Experience Team will create the User Experience Indicator methodology. As we all collect and compare the resulting data, we will analyze it for validity and will refine the methodology over time.

How does this sound to you? Please comment.


Directness - Adam Dorrell said...

We have a lot of experience measuring experience using the Net Promoter Score metric. It's a simple number but can really help you rank what to focus on.

We have had the most success with "emotionally important" purchases. By this we mean that to get a real NPS measurement the customer has to be invested - let's say spending more than $50 - on a piece of electronics, a flight, a car, etc.

I'm not sure that you will get too much feedback unless money changes hands, but I think you should try to measure the website and use incentives as you suggest. In any case, sharing customer feedback with management is worth it.

If you need help to measure we would be delighted to assist.


Karen said...

Thank you for sharing this plan. I look forward to reading how implementing it goes and how your management receives the metrics.

You noted that your satisfaction rating "doesn't help us understand why they're satisfied or dissatisfied." Usability testing, certainly, will give you the best answers to this. However, consider also adding "what you like best/least" questions to your satisfaction scale. This type of question was recommended to me some years ago (I can't recall the source, unfortunately) as a more subtle way to detect satisfaction that is slightly less subject to social bias. Since questions given after a test necessarily rely on short-term memory, the answers are likely to reveal what the user noticed and was most affected by. The more positives noted, the higher the likely satisfaction.

When I've administered post-test questionnaires or interviews with these questions, I've always found it instructive to see how fast the answers come for one over the other. Admittedly, the answers are not perfect measures of product success or failure. Also, they will only partially answer your question of what caused dissatisfaction. However, coupled with your other metrics, I think you could get a lot of mileage from asking for this information.

Another way to analyze the results is to look for words that pop: "loved," "hated," "cool," and so forth. Developing a weighted scale for ranking typical or desired phrases can offer a supporting measurement of satisfaction.

You may already do something like this as part of your testing protocols, but if you are looking for a way to diffuse objections, I hope this suggestion can help.

You also mention at the end that the User Experience Team is creating this as a "framework that makes it easy to regularly measure the SUS…". Do you intend this framework to be applied by your team or do you intend it to be a tool for other managers to use, only engaging your team for analysis? From your post about having a small UX team, I would think that this has to be a tool usable by product teams that you might not work directly with in order for these metrics to become a management incentive. If you do not currently work with all product lines and projects, is this a way to make increased involvement more manageable with existing UX resources? I'm very interested in how the adoption of this will proceed.

Thanks again for a thought-provoking post. I wish you success.

Bill Albert said...

You bring up a lot of interesting points. Tom Tullis and myself just completed the first book devoted to this topic. It is called "Measuring the User Experience". I know it is a shameless plug, but you might find it interesting. We cover a lot of the metrics you discussed here.

Tim said...

Thanks for the shameless, Bill. Looks like you've put together a very important resource, and I'll check it out.

fritz said...

A couple of years later, and this is still as relevant as ever, Tim. After having worked with many UX teams (including Kaiser's) I can honestly say that your recommendations on how to integrate qualitative & quantitative data worked out better than most. It is a fine art that requires that the UX team shepherd product owners and development teams in their understanding and use of the data. Too often, the SUS (or another survey or measurement) can override product decisions and directions by overshadowing the interpretation and analysis of the sticky 'touchy feely' feedback from testing that can give a product the real usability that takes it from good and usable to great, engaging and fun.

Jimmy Jarred said...

The approach you have followed is convincing. Its important to measure user experience for exploring a business in right direction. Thanks for discussing all about it in detail.
