Thursday, March 13, 2008

User Experience Metrics

Metrics-driven organizations help staff focus on what's important by measuring it and making the numbers visible.

I want to use the power of metrics to drive our organization toward a strategic goal of providing a fabulous user experience. Toward that end, I'm drafting an approach that I'll lay out here, including the implications for both management and for product/development teams.


Step One--Incent management

I've worked with senior management in my group to include user experience (UX) metrics in the annual goals of managers in each product line. They, in turn, will put these metrics into the goals of their product owners. Here's the language for those goals:

Prior to going live, [product/functionality] is rated by users on the System Usability Scale (SUS). Threshold = 70/100. Target = 90/100.

On the live web, the User Experience Indicator for [Audience_segment] will be xx or higher by [date].

To be clear, salaries and bonuses are riding (in part) on these goals. If I'm a manager or product owner, I get more money if I provide a great, usable, experience for my end users.

The remainder of this document introduces these goals in more detail.

System Usability Scale
Language for Objectives

Prior to going live, [product/functionality] is rated by users on the System Usability Scale (SUS). Threshold = 70/100. Target = 90/100.


Background
The System Usability Scale (SUS) is a widely used instrument that focuses on just one aspect of the user experience: usability. It asks users the degree to which they agree or disagree with each of 10 statements:

  • I found the web site unnecessarily complex.
  • I thought the web site was easy to use.
  • I think that I would need the support of a technical person to be able to use this web site.
  • I found the various functions in this web site were well integrated.
  • I thought there was too much inconsistency in this web site.
  • I would imagine that most people would learn to use this web site very quickly.
  • I found the web site very cumbersome to use.
  • I felt very confident using the web site.
  • I needed to learn a lot about this web site before I could effectively use it.
  • I think that I would like to use this web site frequently.

This score will normally require testing with 5-12 users. Our User Experience Team can help a product team determine the optimal number of users. We also plan to build the SUS into virtually all usability tests, and we plan to make usability tests easily accessible to product teams. We want it to be super-easy for a team to know the current SUS score of its product.
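For reference, here is a minimal sketch of the standard published SUS scoring arithmetic (nothing specific to our tooling): each 1-5 response contributes 0-4 points, positively worded items as (response - 1) and negatively worded items as (5 - response), and the raw sum is multiplied by 2.5 to give a 0-100 score. Which items count as "positive" depends on how a given survey orders them, so that is parameterized below.

```python
def sus_score(responses, positive_items):
    """Compute a 0-100 SUS score from ten 1-5 Likert responses.

    responses      -- ten integers, 1 (strongly disagree) to 5 (strongly agree)
    positive_items -- 0-based indexes of the positively worded items
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses):
        # Positively worded items contribute (response - 1);
        # negatively worded items contribute (5 - response).
        total += (r - 1) if i in positive_items else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum to the 0-100 SUS range


# One participant, assuming the 2nd, 4th, 6th, 8th, and 10th items in the list
# above (0-based indexes 1, 3, 5, 7, 9) are the positively worded ones.
print(sus_score([2, 5, 1, 4, 2, 5, 1, 4, 2, 5], positive_items={1, 3, 5, 7, 9}))  # 87.5
```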

Appropriate Use
Use the SUS in two places:

  1. on an increment of functionality, prior to going live.
  2. on an entire web experience that is already live, as part of the User Experience Indicator for a given audience segment (see “User Experience Indicator” below)

SUS should be administered under controlled circumstances, typically as part of a usability test.

The User Experience Team also recommends using the SUS as an exit criterion for a sprint or release; essentially, an SUS score of 90 becomes part of the definition of “done.”

User Experience Indicator
Since the SUS measures only usability, we need at least one more metric to measure overall user experience. We have not finished defining a standard metric for this, and we do not have sufficient baseline data, so our 2008 objectives will be to create baselines that will enable us to set hard targets in 2009 and beyond.

Language for Objectives
2008: For [audience_segment], establish a baseline measurement of the User Experience Indicator.
2009 and beyond: The User Experience Indicator for [Audience_segment] will be xx or higher by [date].

The Metric Being Created
The User Experience Indicator will most likely be a combination of three to five questions addressing the following dimensions:

  • Would you recommend this web site to a friend?
  • How satisfied were you with this site?
  • Compared with other web sites, how well did this site meet your expectations?
  • System Usability Scale

A key piece of our organization's strategy is to provide an unexpectedly enjoyable experience for our users—we want them to say, “wow!” Our assertion is that by combining answers to these questions, we will be able to elicit a reliable measure of the overall user experience, including the “wow factor.” If we have hit the mark with the wow factor, responses to all four of these dimensions will be very favorable.

Appropriate Use
These metrics are best associated with the overall experience of a segment of users (e.g., members, brokers, etc.) on a site that is already live, rather than with an individual feature or a site still in development.

The User Experience Team will create a simple formula for combining these four measurements into a single User Experience Indicator and will make it easy for product teams to produce a User Experience Indicator for their product. We hope to have this methodology ready in Q2 2008.
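To make the idea concrete, here is a purely illustrative sketch of what such a formula might look like, assuming each of the four dimensions has already been normalized to a 0-100 scale and weighted equally; the real weights and normalization will come from the User Experience Team's methodology.

```python
def ux_indicator(recommend, satisfaction, expectations, sus,
                 weights=(0.25, 0.25, 0.25, 0.25)):
    """Illustrative only: combine the four dimensions into one 0-100 indicator.

    Assumes each input is already normalized to 0-100 and that the dimensions
    are weighted equally; the actual formula is still to be defined.
    """
    scores = (recommend, satisfaction, expectations, sus)
    return sum(w * s for w, s in zip(weights, scores))


print(ux_indicator(recommend=72, satisfaction=80, expectations=65, sus=88))  # 76.25
```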

Background & Rationale
The User Experience Indicator is a very high-level metric. It measures our success at providing a great user experience, but it is not intended to tell us why a user’s experience was good or bad. We have a whole collection of tools available to dig deeper into the “why” questions. This metric gives us one simple indicator of how an audience’s experience is improving over time and in relation to other audience segments. Here’s the significance of each dimension of the Indicator:

Would you recommend this web site to a friend?
This is based on the work of Frederick F. Reichheld, as originally published in the Harvard Business Review article "The One Number You Need to Grow." It has since been adopted by a wide range of industries and is gathering steam as a standard indicator of business success. It is the basis of the “Net Promoter” discipline. Here’s a brief synopsis of the original article:

Companies spend lots of time and money on complex tools to assess customer satisfaction. But they're measuring the wrong thing. The best predictor of top-line growth can usually be captured in a single survey question: Would you recommend this company to a friend? This finding is based on two years of research in which a variety of survey questions were tested by linking the responses with actual customer behavior--purchasing patterns and referrals--and ultimately with company growth. Surprisingly, the most effective question wasn't about customer satisfaction or even loyalty per se. In most of the industries studied, the percentage of customers enthusiastic enough about a company to refer it to a friend or colleague directly correlated with growth rates among competitors. Willingness to talk up a company or product to friends, family, and colleagues is one of the best indicators of loyalty because of the customer's sacrifice in making the recommendation. When customers act as references, they do more than indicate they've received good economic value from a company; they put their own reputations on the line. The findings point to a new, simpler approach to customer research, one directly linked to a company's results.

We hypothesize that a user with a “wow” experience is more likely to say they would recommend the site to a friend.
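This post doesn't commit to a particular scoring method for the "would you recommend" question, but for illustration, the standard Net Promoter arithmetic on a 0-10 scale looks like this:

```python
def net_promoter_score(ratings):
    """Standard Net Promoter calculation on 0-10 'would you recommend' ratings.

    Promoters rate 9-10, detractors rate 0-6; NPS = %promoters - %detractors.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)


print(net_promoter_score([10, 9, 8, 7, 6, 10, 9, 3]))  # 4 promoters, 2 detractors -> 25.0
```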

How satisfied were you with this site?
While the “would you recommend” question has many proponents, there are also those who argue it does not adequately address overall satisfaction. E.g., maybe I would recommend this site because it’s the only place in the world I can buy a particular product, even though my experience of the web site itself is horrible. Or maybe I love the site, but I wouldn't recommend it to a friend because it's not relevant to any of my friends (or colleagues). Because of these limitations, we add a basic satisfaction question.

A key limitation to this kind of generic satisfaction question is that it doesn't help us understand why users are satisfied or dissatisfied, so I've heard concerns that a satisfaction score isn't useful. However, in this instance, we're using the score as part of a measure of success rather than as formative research. If my satisfaction scores are not high enough, I am then incented to figure out why my users are dissatisfied and what I can do about it. We have lots of other tools available to help with those tasks.

The User Experience Indicator will be particularly sensitive to the extremes on this scale. Since our goal is to produce an unexpectedly enjoyable experience, we don’t want people to be only somewhat satisfied; we want them to be thrilled (wow!). So we will initially aim for a threshold of “satisfied or extremely satisfied,” but we will quickly move to a target of changing users from “satisfied” to “extremely satisfied.” For example, in a November 2007 survey of registered members of kp.org, 86% said that they were either satisfied or very satisfied with kp.org. This looks very good and would be a very difficult number to improve. But a closer look at the data shows that this 86% is a combination of 49% satisfied and 37% very satisfied. A meaningful User Experience Indicator would incent teams to increase the percentage reporting “very satisfied.”
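As a hypothetical illustration of how a scoring scheme could reward that shift, here is a sketch that gives full credit only for "very satisfied" and, as an arbitrary assumption, half credit for "satisfied":

```python
def satisfaction_score(very_satisfied_pct, satisfied_pct, partial_credit=0.5):
    """Hypothetical 0-100 score: full weight for 'very satisfied', partial
    (assumed) weight for 'satisfied', so only the top box earns full credit."""
    return very_satisfied_pct + partial_credit * satisfied_pct


# November 2007 kp.org numbers: 37% very satisfied + 49% satisfied = 86% combined.
print(satisfaction_score(37, 49))  # 61.5 -- plenty of headroom despite the 86%
# Move 20 points of respondents from "satisfied" to "very satisfied":
print(satisfaction_score(57, 29))  # 71.5 -- the score now rewards the shift
```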

Compared with other web sites, how well did this site meet your expectations?
This dimension adds two elements not covered by the previous two:

  • Comparison to other web sites (across industries). This comparison is critical to the context in which users access our web sites. It’s important to go across multiple industries, because users’ perceptions are ultimately based on their experiences with their favorite sites (shopping, banking, blogging, etc.), rather than only with our direct competitors.
  • Experience in relation to expectations. Our strategy is to provide an “unexpectedly enjoyable” experience, so we want to find out how well we did relative to what they expected. As users’ expectations increase, we will need to continue innovating to stay ahead of their expectations. This dimension helps us understand the “unexpected” part of the “wow” factor.

System Usability Scale (SUS)
See above for an introduction to this metric. When included in the User Experience Indicator, the SUS score applies to an entire web presence for a given audience, rather than to an increment of functionality. It is measured in a production environment.

The User Experience Team intends to provide a framework that makes it easy to regularly measure the SUS for each major audience segment.

Next Steps

To use metrics effectively, an organization needs to do four things:

  1. Define metrics that measure what's important
  2. Make it easy to measure these metrics
  3. Make these metrics widely visible
  4. Formally incent staff to meet targets for these metrics

This post is an initial crack at #1 and #4. The User Experience Team will create the User Experience Indicator methodology. As we all collect and compare the resulting data, we will analyze it for validity and will refine the methodology over time.

How does this sound to you? Please comment.

Thursday, March 6, 2008

Stages of Acceptance of User-centered Design

Here's a draft framework for thinking about how people move from not appreciating the importance of user experience to a place where they build it into everything they do.

The idea here is that if you want to create a user-centered product, you need to create a user-centered culture. And in order to create a user-centered culture, you need to move individuals through these stages of understanding. The type of training and influence required for an individual depends on which stage they're in.

Here's a hypothesis for the stages of "getting it" that people need to go through.

  1. You are not your user
  2. Understanding users requires direct contact with them
  3. Knowledge about users must be grounded in real data and must be actionable
  4. Business results depend on satisfying users
  5. Every decision should be influenced by its implications for the user experience
  6. Specialists can really help, but everyone is responsible for the user experience

What's your experience in helping people through stages? What part of this rings true, and what parts need to be removed or revised?

Tuesday, February 12, 2008

Training

One of the most powerful roles the small core UX team can play is that of trainer. The team will need to, over time, implement a comprehensive training program that's repeatable enough to be efficient and flexible enough to meet the needs of each team. Here are some early thoughts about the training program:

Run through everybody in the following rough sequence:

  1. Surrogate evangelists (initial development team, key sponsors, product owners)
  2. People who need specific skills (scrummasters and business analysts)
  3. Everybody else, one team at a time

Early on, pick one or two development teams and do a deep UX initiation with the whole team. The best team for this would be one that is highly respected by other teams and that's already very open to UX values. This team becomes a key evangelist and thought-partner in creating future trainings and in evolving the model.

Next, bring key sponsors on board. These are people in our web organization who manage the overall business portfolios--they manage the overall demand coming in from customers, set the high-level priorities, oversee product management, and usually supervise the product owners and SMEs. Getting these folks on board with the overall UX movement will grease a lot of wheels, help me get a good budget, and prime the pump with everyone else. (After a year of informal evangelism targeted at these folks, along with a great directive from our director, these folks basically understand the importance of user experience.)

I want to do one formal session with key sponsors to give them three things:

  • Effective language and approaches to use with their stakeholders and their teams. We've learned a lot about how to evangelize UX, and I want these key managers to be as effective as possible.
  • An introduction to the base tool set, so they have some background when their staff start talking to them about personas, card sorts, IA, etc.
  • Specific assignments. I haven't worked these out yet, but I want to help them channel their support for UX. Some possibilities are...
  • Each manager should incorporate some standard language into the formal objectives of themselves and their staff, to give everyone financial incentives to address UX issues. We can give them some boilerplate language as a starting point.
  • Each manager should ask some specific questions during weekly demos to help the team stay focused on UX, and to help the managers maintain a good sense of where each team is at in this regard. We can help them formulate these questions.
  • Each manager should let their staff know that they are encouraged, and in some cases required, to attend UX-related training. This gives staff permission to take an hour or two (or a day or two) away from their deadlines to do UX-related training.

Next, Product Owners:

  • Evangelize them on user experience
  • Introduce them to the core toolset and the support available
  • Think together about how to build UX into the product backlog and sprint exit criteria
  • Work with them on an ongoing shared product backlog for things like creating and implementing design patterns or templates across the site

Next (or at the same time), Scrummasters:

  • Start with an introduction to the kinds of help we have to offer and how to recognize when they need that help.
  • Then transition the "training session" to a collaborative working session to come up with some shared best practices and places to innovate, using as a starting point the "UX-in-the-product lifecycle" model (I'll blog on this soon).

After Product Owners & Scrummasters, hit the Business Analysts:

  • Evangelize--give them something to live for to replace the requirements documents that have dominated their work lives for the last 3 years
  • Introduce them to the core toolset and the support available
  • Give them the hands-on skills to do things like user-centered stories, effective use of personas, and card sorts.

By this time we'll have a pretty decent skeleton of a support system in place and three people on each team who can tangibly make use of general excitement about the user experience. We then go from team to team showing them:

  • how valuable UX is
  • how easy we're making it for them to be successful
  • how to know when they need help

Sound like a lot of work just on training? Yes, but less work than it would be to embed a UX specialist in every team, and in the long run everything will be easier if the development teams are convinced they need user-centered design in order to produce optimal results.

I'm guessing that the core UX team could create the bulk of the training curricula in one or two short (2-week) sprints. Using 3 or 4 people from the core UX team as trainers/evangelists, we could blow through this program pretty quickly, simultaneously creating demand for our services, enabling self-service, and establishing some shared agreements for how we'll all work together.

What do you think? Is this crazy? Way too much overhead for agile? Or is it a sensible way to leverage a small UX team across many development teams? We'll give it a try and find out. In the meantime, what's your advice?

Thursday, February 7, 2008

Core User Experience Team (part 2 of 7)

When trying to leverage a handful of UX specialists across a large number of agile development teams, the first question is, should we just divvy up the user experience (UX) specialists across all the teams? I say no. That would give each UX specialist 5 teams to support, and would more or less require each UX specialist to be good at the whole range of UX tools, from IA to interaction design to user research. I think a 1:5 ratio spreads each specialist too thin for this to work. So I'm starting from the premise of a core team that provides support as a team.

Who's on the core UX team?
This team will include...
  • UX specialists
  • Creative specialist
  • Page developer
  • Product Owner
  • Scrummaster
  • Business Analyst
  • SME who understands the web sites built so far

We start with 2 people with formal training and significant experience doing things like IA, usability testing, interaction design, etc. We've gotten approval to hire two more.

We'll start with one creative specialist (graphic design of UI). He's formally part of a small creative group, so he can call in help as needed, and he'll have help from the creative group in terms of staying on brand, etc.

We'll start with one page developer who will do HTML, JavaScript, CSS, Ajax, etc. In our case, the page developer(s) will need to really understand how to work with WebSphere Portal (themes, skins, page layouts, etc.), since that will be the presentation layer for much of what we do.

This core UX team will operate as a scrum. Just like any other scrum, they'll have a product backlog (more on that below), sprints, releases, daily scrums, etc. The key difference between this core UX team and the more traditional scrums (are we allowed to call scrums "traditional" yet?) is the nature of the product and the backlog.



A new spin on the product backlog

In most development teams, the product is a functioning collection of software that meets specific non-functional requirements and produces business value. A roadmap might typically include a list of major batches of features, with some infrastructure along the way. And the typical product backlog would be specific bits of functionality that can be coded.

The core UX team I'm proposing will still have a roadmap and a backlog, but the product might be described as "user centered design across the enterprise." An initial product roadmap might include things like...

  • Training program
  • Repeatable process for usability testing
  • High-level IA and wireframes
  • Design pattern library
  • Model for financing work requests from development teams
  • Reusable navigation widgets

The product backlog would include relatively tangible things, like page templates or reusable navigation widgets, but it will also include less tangible "services," like training programs and user research. I could imagine the team focusing a release on an initial training program. In the first sprint of the release, the team might produce all the materials required to train one development team in the art and science of personas and scenarios. Just like "potentially shippable software" of a traditional scrum, the Product Owner could decide whether to go ahead and conduct that training, or to wait for the next sprint to produce training materials that enable a business analyst to conduct card sorts.

Sprints that don't produce code

The team would figure out how to maximize the use of all team members during the sprint--if the creative person isn't needed for the card sort activities, he or she could spend that sprint preparing reusable CSS and background graphics, for which a training could be quickly developed in the following sprint.

We'll have a challenge figuring out how to maintain a unifying theme for each sprint, when not all activities involve graphic design or page development, but this sounds similar to a typical agile team when they're in a sprint focusing on something like upgrading the hardware infrastructure. Team members will go in seemingly unrelated directions, but they'll understand amongst themselves how it will all come together.

Ideally, the team would prioritize repeatable processes that could be used immediately by specific development teams, and would create just enough organizational infrastructure to support the immediate needs (e.g., initially just a sign-up sheet for training; a later release might include a recharge mechanism and a schedule for giving the core training to 200 people.)

These are just some quick examples. The point is that the product backlog contains both software and services, both of which are "potentially shippable" at the end of each sprint. A release could consist not only of software, but of a repeatable process with the infrastructure in place to support it.

What do you think? Does this sound like an agile team? Will it work? What am I leaving out?

Next post will be a draft approach to UX training throughout the enterprise.


Wednesday, February 6, 2008

Leveraging a small user experience team (1 of 7)

We have a small handful of user experience (UX) specialists, and we're ramping up our agile development teams. How can 4 UX specialists support 20 teams when the 20 teams are dedicated, co-located, moving on their own schedules, constantly changing direction, and not necessarily on board with the importance of user experience?

We had a good all-day session today mapping out how we want to start approaching this. The central theme is that the small team of UX specialists will operate as an agile product team. But unlike a typical agile development team dedicated to producing executable code, our product will be user-centered design. Each sprint will produce "shippable user-centered design." Sometimes this will look like software, sometimes it will look more like a service. The result will be a large number of teams producing software designed to produce a great user experience.

Here are the key ideas so far...

We are an agile team, and user-centered design is our product.

  • We form a small core team that uses standard agile methodologies in order to provide user-centered design services and products to all the development teams.

We empower the development teams to do user-centered design.

  • We put a lot of energy into training and UX evangelism.
  • We make it super-easy for the development teams to get face-time with end-users throughout their work.
  • We inject the core UX team into development teams at key leverage points in the product lifecycle.

We do hands-on user-centered design.

  • We create reusable artifacts that have good UX principles built in.
  • We provide ad hoc consulting and design services to the application teams.

I'll flesh out each of these bullets in subsequent posts. How does this look as a starting point?




Tuesday, February 5, 2008

User Research as a Commodity (part 3 of 7)

The Problem
I’d like to describe an approach we’ve been using at Kaiser to make it easier for development teams to incorporate user insights into their work. Just about every development team on the planet could benefit from more user research—usability testing, card sorts, label tests, brand reactions, cognitive interviews, etc. The more exposure teams get to their end users, the more user-centered their work will be, and the better the user experience (UX).

In my experience, there are two main reasons why teams don’t do more user research:
  1. They don’t understand its importance
  2. It’s too hard. You have to schedule it into the project plan, find a place to do it, prepare stimuli, recruit participants, deal with incentives, figure out what to test, and then spend time actually testing.


There’s a cool relationship between these two barriers to research: If we can make it super-easy for teams to do the research, they’re more likely to actually do it, and once a team does a little user research, they usually understand its importance and want more. The key is to prime this cycle.

So how do we get this cycle going? How do we make it incredibly easy? How do we change user research from a hassle that interrupts the work to a commodity that can be easily acquired on-demand?

The Challenge
Last year we commissioned an agile-like team to build a new web site for brokers--the professionals who help employers select and purchase health plans for their employees. The team needed to produce a beta in two months and a fully operational site in four months, and they needed to do it on an entirely new technology platform.

The pressure was on. At that time, our typical waterfall timeline for even simple projects was over a year. We typically did a week or two of usability testing in conjunction with the requirements phase, before development, or even technical design, started.

We had already done some ethnography with brokers, and the Product Manager was totally convinced of the need to incorporate usability testing, etc. into the design & development work. But many others, including developers, the project manager, and sponsors, thought of user research as a nice-to-have that was likely to blow the tight schedule.

Everybody was excited about moving to agile, but we didn’t have a clue how we were going to fit two weeks of usability testing into the work. Should we do it mid-way through when we’d have good comps to show users? But we couldn’t afford that kind of a break in the project plan. Or maybe front-load the testing, because the developers needed to take several weeks early on to get their environments in order? But we didn’t yet have any idea what the new technology would allow in terms of the UI. Complicating matters, our usability specialists were in Pasadena, CA, while the agile-like team was in Pleasanton, CA—350 miles away. Not only that, but brokers are difficult participants to recruit—they work in offices all over the country and they have very busy schedules, so the team couldn’t just take a paper prototype down to the nearest Starbucks and ask them what they think (as we’ve done with some other audiences).

When asked, the sponsors and most of the team thought the best thing to do would be to crank out an initial site on a tight timeline, doing the best we could based on intuition and a few heuristics, and then test it with users after we went live. We were going agile, they figured, so we can change it easily after we’re live. After all—it’s only UI. ;-)

Clearly, the UX people didn’t understand agile, and the software people didn’t understand UX.

The Solution: Testing as a commodity
The Product Manager and I were convinced that we needed to expose our work to end users prior to going live, and we could see there was no way we’d get even a week out of the schedule. So we decided to try something new—prescheduled testing. Here’s how it worked:

Every other Thursday morning, four or five brokers (our target audience) would show up at our offices in Pasadena. Our user research specialist would work with each participant for about an hour. While this testing took place in Pasadena, we piped audio and video of the testing up north to Pleasanton so the team could watch in real-time and IM questions and comments to the moderator in Pasadena.

Since we knew the testing schedule several weeks, and even months, in advance, we were able to easily schedule a room for the testing. One of our admin staff, who’s particularly good on the phone, took on recruitment. With multiple dates prescheduled, recruitment was easier—“OK, you can’t make it next Thursday; how about two weeks later? How about the Thursday of the following month?” We also had an admin person manage all the incentive checks, greet the participants in the lobby, and help with set-up and tear-down, all of which was essentially the same each week.

In the early stages testing consisted mostly of card sorts and cognitive interviews. As the project progressed, we moved to various stages of UI, focusing on whatever the team had just built or was about to build—one week working on a page layout; the next week focusing on a particular widget.

Results
The key breakthrough for us was that we made it super-easy for the agile team to get the benefit of user research. They didn’t have to stop what they were doing, they didn’t have to deal with logistics, and the only planning they needed to do was to be sure they had something to show users by Thursday morning, along with some good questions to ask.

It worked great. The team made constant course corrections based on the research. We would typically end sessions by asking the participants, “on a scale of 1-5, how easy was this site to use? (5 is easiest)” In two months of iteration we moved this from a 2.5 to a 4.5. The team could come up with ideas and never have to wait more than two weeks to test the ideas with users.

Sponsors could tune in to view the testing whenever they wanted, and they were delighted to see their customers, the brokers, delighted. The personas became real people. We virtually eliminated arguments within the team about what would work best for users. Instead of pressing the point, the Product Manager could just say, “I’ll ask them this Thursday.”

Why it worked
I can’t stress enough the importance of two key elements:

  1. A regularly scheduled research time, scheduled up to months in advance.
  2. Logistical support from outside of the project team

Scaling

So the pilot was successful. Everybody loves the initial site. Everybody wants to go agile. That means we’ll soon be looking at up to 20 agile teams operating simultaneously, working on multiple sites that support several different audience segments (brokers, Kaiser members, employers, etc.). Can we scale this approach to work in that environment? Here are my thoughts so far…

Brokers are now actively involved in a beta program, and we still have every-other-Thursday prescheduled research available to them as a tool. As we start up an agile team for employer groups, I think we can use a very similar model. The challenge will be with Kaiser members, because we’re likely to have many agile teams running simultaneously, focusing on different content and functionality, and attending to different subsegments of the member audience. It sounds pretty unwieldy to give each team its own half-day slot every other week.

What if we instead treat the user research environment and participants as a commodity available to all the teams? Every Monday and Wednesday we bring in members all day long. Every Tuesday we bring in employers. Every Thursday we bring in brokers. We post a schedule, and individual agile teams can sign up for time.

Not every agile team will have an hour’s worth of user research tasks for participants each week, but the commodity approach helps out again: We can bring in 8 participants in one day and spend an hour with each. During that hour, we may do a simple 5 minute “find this content” task for Team A, a longer “complete this transaction” task for Team B, and a broad “see if we just broke the UI” task for Team C.

What if teams don’t generate enough tasks to fill up the time this week? No problem--our “Platform UI Team” will maintain a backlog of non-urgent areas to test with each audience, so they can fill in as needed. What if Team D has some questions, but next Monday’s schedule is already filled by other teams? No problem--we’ve got open time on Wednesday’s schedule.

With the recruitment and logistics down to a routine managed by admin staff, and the bulk of the testing budget managed as a shared service (and thus “free” to the teams), the user research specialists and Product Owners are free to formulate good tasks and questions and to apply the results of the research.
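As a rough sketch of what that sign-up sheet might look like in practice (the audience-to-weekday mapping, slot counts, and team names below are illustrative assumptions, not a committed design):

```python
from collections import defaultdict

# Illustrative schedule: fixed research days per audience, hour-long slots.
RESEARCH_DAYS = {"members": ["Mon", "Wed"], "employers": ["Tue"], "brokers": ["Thu"]}
SLOTS_PER_DAY = 8  # roughly one participant-hour each

schedule = defaultdict(list)  # (audience, day) -> list of (team, task) bookings


def sign_up(audience, day, team, task):
    """Claim the next open slot for a team; returns False if the day is full."""
    if day not in RESEARCH_DAYS[audience]:
        raise ValueError(f"{audience} sessions don't run on {day}")
    bookings = schedule[(audience, day)]
    if len(bookings) >= SLOTS_PER_DAY:
        return False  # full -- try this audience's other research day
    bookings.append((team, task))
    return True


sign_up("members", "Mon", "Team A", "5-minute find-this-content task")
sign_up("members", "Mon", "Team B", "complete-this-transaction task")
print(schedule[("members", "Mon")])
```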

Back to metrics
Here’s one more possible extension—we haven’t tried it yet, but we’re toying around with the idea.

Kaiser Permanente as a whole is struggling to become a more metrics-driven organization. Our physicians are internationally recognized for how well they practice evidence-based care, but our business practices haven’t yet caught up. Over the next months and years, our performance will be increasingly judged on metrics, and our agile teams will be measured and incented based on metrics like:

  • Health outcomes
  • Sales
  • Operational efficiencies
  • Time-to-market
  • Cost
  • Backlog burndown

Along with these “bottom line” metrics, we’ll also pay a lot of attention to leading indicators—the metrics that show early on whether we’re moving toward improvements in the bottom line metrics. As we become a more metrics-driven organization, how can we ensure that people pay attention to user experience?

Those who already “get it” will know that the best way to achieve great business results is to provide a fabulous user experience designed around the needs and perspectives of the end-users. But for those who don’t yet get it, time-to-market, cost efficiencies, and even sales can seem to be more important than user experience. How can we use the metrics system to incent people to both perform good user research and to use that research to improve the user experience?

SUS Tracking as a Commodity
A while back we had some significant availability issues with one of our sites—too many unplanned outages. A one-page dashboard of metrics was arguably the most powerful influence for fixing the problems. Suddenly, people at every level of the organization could easily see, on a weekly basis, how many times the site experienced slowness, how many minutes it was down for planned outages, and how many minutes it was down for unplanned outages. The numbers were bad. The numbers were visible. The numbers got executives and team members to come together to make those numbers improve dramatically.


What gets measured gets managed.


What if we published a similar metric for user experience? What if teams were incented to “bring your user experience metric up a point?” What if teams could see that their recent release moved their score up (or down), and what if all of our peers could see our user experience scores? How could we do that without wasting a whole bunch of time and getting in everybody’s way?

What if each site (or product) had a regularly scheduled UX Checkup? I’m thinking maybe every 2-3 months. This would be part of the user research schedule. The user research specialists could work with Product Owners to create a script. Then every couple of months, without the product team needing to lift a finger, we would take participants to the site in the production environment and run through the script.

  • Finally fixed that annoying bug? Score goes up.
  • Haven’t yet implemented the feature the users most want? Score stays the same.
  • Rushed to get a new feature in on time and screwed up the IA in the process? Score goes down.

My incentive pay in 2007 was based in part on an objective of brokers scoring the site at least ‘4’ on a 1-5 usability scale. Could we possibly scale this so that everyone is incented to provide a measurably excellent user experience? There are two fairly obvious candidates for this metric: the SUS and a general satisfaction question.

SUS deals with usability quite nicely, but leaves out other critical aspects of the user experience, whereas satisfaction questions are notoriously unfocused.

The two might work well in combination, with the SUS score assessed in person and the satisfaction question routinely asked via survey.
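Here is a hypothetical sketch of how those periodic UX Checkup numbers might be tracked and surfaced on a dashboard, assuming an in-person SUS score plus a survey-based satisfaction percentage per checkup; the product names and figures are made up for illustration:

```python
from datetime import date

# product -> list of (checkup date, SUS score, % satisfied); names are illustrative.
checkups = {}


def record_checkup(product, when, sus, satisfaction):
    checkups.setdefault(product, []).append((when, sus, satisfaction))


def dashboard_row(product):
    """One dashboard line: latest scores plus the SUS change since the previous checkup."""
    history = sorted(checkups[product])
    _, sus, sat = history[-1]
    delta = sus - history[-2][1] if len(history) > 1 else 0
    return f"{product}: SUS {sus} ({delta:+.1f} since last checkup), satisfaction {sat}%"


record_checkup("broker site", date(2008, 1, 10), sus=78, satisfaction=72)
record_checkup("broker site", date(2008, 3, 10), sus=84, satisfaction=75)
print(dashboard_row("broker site"))  # broker site: SUS 84 (+6.0 since last checkup), satisfaction 75%
```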

Inspirational Conclusion

Creating a fabulous user experience is all about a user-centered culture with access to the tools of user-centered design, combined with the ability to deliver software and operations support. The biggest lever we have for creating a user-centered culture is exposing everyone involved to their end-users in valuable ways. If we can do that, the users will thrive, and so will the business.

One way to promote exposure to end-users is to remove the main barriers to basic user research by offering pre-scheduled user research that is easy, scalable, and measurable.

Please comment!