Cumulative cost of a few seconds

Currently, I’m on a project team that’s designing, building, and implementing call-centre software. You can probably imagine the call-centre experience from the customer side—we’ve all had our share of those calls. I’ve been looking at call centres from the other side—from the perspective of the customer-service agents and their employer.

I started by observing customer-service agents on the job. At the site I visited, the agents were using a command-line system, and they typed so fast that I couldn’t make sense of their on-screen actions. I signed up for several weeks of training to become a novice customer-service agent. This allowed me to make sense of my second round of observations, and to appreciate how efficiently the agents handle their customer calls. It also helped me identify tasks where design might improve user performance.

[Image: Wrap-up choices]

For example, after each call the agent decides why the customer called and then, by scanning lists of main reasons and detailed reasons, “wraps up” the call, as illustrated. I measured the time on task: the average wrap-up takes nine seconds.

It’s only nine seconds

Nine seconds may not seem long, but let’s make a few (fictitious but reasonable) assumptions, and then do a little math.

If the average call-handling time is five minutes, or 300 seconds, the 9 seconds spent on call wrap-up is 3% of the total handling time. A full-time agent could spend 202,500 seconds—that’s 56¼ hours per year—on call wrap-ups, assuming a 7½-hour workday (about 90 five-minute calls a day), 250 workdays a year, and no lulls in incoming calls. Since call volumes vary, there will be times when call volumes are too low to keep all agents taking calls. The customer-service agents have other tasks to complete during such lulls, but if we assume this happens about a third of the time, we need to round down the 56¼ hours accordingly. Let’s choose a convenient number: 40 hours, or one workweek per agent per year.
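If you’d rather check that arithmetic than take it on faith, here it is as a few lines of Python (every figure is one of the fictitious assumptions above, not a measurement):

    # Back-of-the-envelope check of the wrap-up figures.
    # All values are the fictitious assumptions from the text.
    CALL_SECONDS = 300                # average call-handling time
    WRAPUP_SECONDS = 9                # average wrap-up time per call
    WORKDAY_SECONDS = 7.5 * 3600      # a 7.5-hour workday
    WORKDAYS_PER_YEAR = 250

    calls_per_day = WORKDAY_SECONDS / CALL_SECONDS            # 90 calls
    wrapup_per_year = calls_per_day * WRAPUP_SECONDS * WORKDAYS_PER_YEAR
    print(wrapup_per_year)                                    # 202500.0 seconds
    print(wrapup_per_year / 3600)                             # 56.25 hours
    print(wrapup_per_year / 3600 * 2 / 3)                     # 37.5 hours; call it 40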

One workweek is 2% of the year.

Based on this number, a redesigned call wrap-up that takes only half the time would save one percent of the labour. Eliminating the wrap-up entirely would save two percent. That frees a lot of hours for other tasks.

A similar calculation on the cost side (n hours to design and implement the changes) leaves us with a simple subtraction: projected saving minus cost is the return on investment, or ROI. Comparing that number to similar numbers from other projects we could pursue instead—the opportunity costs—makes it easy to decide which design problem to tackle first.
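To make the subtraction concrete, here’s a minimal sketch; the hourly rate, team size, and design costs below are invented for illustration:

    # A sketch of the ROI comparison; every figure here is made up.
    HOURLY_COST = 30      # hypothetical loaded cost per agent-hour
    AGENTS = 50           # hypothetical number of agents

    def net_return(hours_saved_per_agent_per_year, design_cost):
        saving = hours_saved_per_agent_per_year * AGENTS * HOURLY_COST
        return saving - design_cost

    # Halving the wrap-up saves about 20 hours per agent per year.
    print(net_return(20, design_cost=12000))   # 18000
    # A competing project, for comparison.
    print(net_return(5, design_cost=2000))     # 5500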

Leaner, more agile

This week, I’m attending a few days of training in agile software development, in an Innovel course titled Lean, Agile and Scrum for Project Managers and IT Leadership.

My first exposure to agile was in Desiree Sy’s 2005 presentation, Strategy and Tactics for Agile Design: A design case study, to the Usability Professionals Association (UPA) annual conference in Montreal, Canada. It was a popular presentation then, and UPA-conference attendees continue to be interested in agile methods now. This year, at the UPA conference in Portland, USA, a roomful of usability analysts and user-experience practitioners discussed the challenges that agile methods present to their practice. One of the panellists told the room: “Agile is a response to the classic development problem: delivering the wrong product, too late.” There was lots of uncomfortable laughter at this. Then came the second, thought-provoking sentence: “Agile shines a light on the rest of us, since we are now on the critical path.” Wow! So it’s no longer developers, but designers, usability analysts, etc., who are holding up the schedule?

[Image: An agile load]

During this week’s training, I’m learning lots while looking for one thing in particular: how to ensure agile methods accommodate non-developer activities, from market-facing product management activities, to generative product design, to early prototype testing, to usability testing, and so on.

I’m starting to suspect that when agile methods “don’t work” for non-developers, it’s because the process is wagging the dog (or that its “rules” are being applied dogmatically). I think I’m hearing that agile isn’t a set of fixed rules—so not a religion—but a sensible and flexible method that team members can adapt to their specific project and product.

Design requires courage and trust, not just user involvement

Designing is usually a rewarding activity, but the path from start to finish can be filled with frustration and even panic. I’ve seen design processes work—and come to the realisation that “My own designs benefited from rapid iteration!”

[Image: The benefit of design]

These humbling experiences helped me learn to trust the process, even in the face of frustration or panic. It’s these experiences that give me the courage to follow the design process, even when it isn’t clear how to resolve the tension between conflicting design constraints.

In the face of an unknown, individuals and especially teams tend to turn to knowns. If needed, they’ll manufacture the known data by deferring the choice to users. Here’s part of what Larry Constantine wrote about courage in software design, in a paper that advocates for user involvement at the right time:

Most damning and least recognized among the limitations of user-centered design is the way it subtly discourages courage. Courage is one of the central tenets of extreme programming and agile development methods. […] User-centered design makes it too easy for designers to abdicate responsibility in deference to user preference, user opinion, and user bias. In truth, it is hard to stick with something you know works when users are screwing up their faces at it. What if you are wrong? What if you are not as good a designer as you thought you were? It takes real courage and conviction to stand up for an innovative design in the face of users who complain that it is not what they expected or who want it to work just like some other software or who object to certain sorts of features as a matter of course. It takes responsible judgment to know when to listen to users and when to ignore them.

In the many design sessions I have facilitated, three times I’ve seen that lack of courage expressed by a participant. Each time, it sounded like a mix of panic and frustration:

The solution has been on the wall since the first round!

The design sessions I facilitate ask participants to saturate the design space with lots of ideas. They each bring five sketches—five substantially different ideas—and then, after sharing their ideas with the other participants, they rapidly iterate the first 15 or 20 sketches to develop even more. All this takes place before any analysis.

When the goal is to saturate the design space—to identify as many solutions as possible in a short time—there is more material to influence the design once the analysis begins. Inevitably, the design the team decides on was not already on the wall. Motivated design participants quickly learn this and, in most cases, become advocates of the process.

For most development teams, the Five Sketches™ process I introduce is a departure from the status quo, so it takes courage for their team members to take a stand, to say “I will use this process” for design problems that need it.

How to test earlier

Involving users throughout the software-development cycle is touted as a way to ensure project success. Does usability testing count as user contact? You bet! But since most companies test their products later in the process, when it’s difficult to react meaningfully to the user feedback, here are two ways to get your testing done sooner.

Prioritise. Help the Development team rank the importance of the individual programming tasks, and then schedule the important tasks to complete early.

[Image: Prioritise and schedule the tasks]

  • If a feature must be present in order to have meaningful interaction, then develop it sooner.
  • For example, email software that doesn’t let you compose the message is meaningless. To get meaningful feedback from users, they need to be able to type an e-mail.

    Developers often want to start with the technologically risky tasks. Addressing that risk early is good, but it must be balanced against the risk of a product that’s less usable or unusable.

  • If a feature need not be present, or need not be working fully, in order to have meaningful interaction, then provide hard-coded actions in the interim and add the feature later (see the sketch after this list).
  • For example, if the email software lets users change the message priority from Standard to Important, hard-code it for the usability test so the priority is always Standard.

  • If a less meaningful feature must be tested because of its importance to the business strategy, then develop it sooner.
  • For example, email software that lets users record a video may be strategically important for the company, though users aren’t expected to adopt it widely until most laptops ship with built-in cameras.
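Here’s a minimal sketch of that interim hard-coding, in Python; the function and field names are hypothetical, not from any real product:

    # Hypothetical e-mail client code, stubbed for an early usability test.
    # The priority control isn't built yet, so every message reports the
    # default priority; participants can still complete the core task.

    def get_message_priority(message):
        # Hard-coded for the usability test: always Standard.
        return "Standard"

    def send_message(message):
        priority = get_message_priority(message)
        print(f"Sending ({priority}): {message['subject']}")

    send_message({"subject": "Quarterly report", "body": "Numbers attached."})

If participants never change the priority, the stub stays invisible to them; when the real control is scheduled, it simply replaces the hard-coded return value.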

Schedule. For each feature to be tested, get the Development team to allocate time to respond to usability recommendations, and then ensure this time is neither reallocated to problem tasks nor used up during the initial development of the to-be-tested features. Engage the developers by:

  • Sharing the scenarios in advance.
  • Updating them on your efforts to recruit usability-study participants.
  • Retesting after developers incorporate your recommendations, and then reporting improvements in user performance.

Development planning that prioritises programming tasks based on the need to test, and then allows time in the schedule to respond to recommendations, is more likely to result in usable, successful products.

Which user involvement works

User-centred design (UCD) advocates involving users in the design process. Have you wondered what form that user involvement could take, and which forms lead to the most successful outcomes?

I recently came across data that Mark Keil published a while ago. He surveyed software companies and correlated project outcomes with the type of user access that designers and developers had.

Type of contact with users (bars show relative effectiveness)
  For custom software projects
  Facilitate teams, hold structured workshop with users, or use joint-application development (JAD). ██████████
  Expose users to a UI prototype or early version to uncover any UI issues. ██████
  Expose users to a prototype or early version to discover the system requirements. ████
  Hold one-on-one semi-structured or open-ended interviews with users. ████
  Test the product internally (acceptance testing rather than QA testing for bugs) to uncover new requirements. ██
  Use an intermediary to define user goals and needs, and to convey them to designers and developers. ██
  Collect user questions, requirements, and problems indirectly, by e-mail or online locations.
  For packaged or mass-market software projects
  Listen to live/synchronous phone support, tech-support, or help-desk calls. ████████
  Hold one-on-one semi-structured or open-ended interviews with users. ██████
  Expose users to a UI prototype or early version to uncover any UI issues. ████
  Convene a group of users, from time to time, to discuss usage and improvements. ████
  Expose users to a prototype or early version to discover the system requirements. ██
  Test the product internally (acceptance testing rather than QA testing for bugs) to uncover new requirements. ██
  Consult marketing and sales people who regularly meet with and listen to customers. ██
  At trade shows, show a mock-up or prototype to users and get their feedback.
  Not reported as effective in this 1995 source
  Conduct a (text) survey of a sample of users.
  Conduct a usability test to “tape and measure” users in a formal usability lab. (This study precedes such products as TechSmith Morae.)
  Observe users for an extended period, or conduct ethnographic research.
  Conduct focus groups to discuss the software.

Although Keil’s article includes quantitative data, his samples are small. I opted to show only the relative usefulness of various methods. My descriptions, above, are long because the original article uses 1995 terms that have shifted in meaning. I believe some of the categories now overlap, due to changes in technology and method. For example, getting users to try a prototype of the UI in order to uncover UI issues sounds like the early usability testing that I do with TechSmith Morae, yet the 1995 results gave these activities very different effectiveness ratings.

For details, see the academic article by Mark Keil and Erran Carmel, “Customer-developer links in software development” (Communications of the ACM, 1995). Educational publishers typically require a fee for access.

These methods also relate to research I’m doing on the epistemology of usability analysis.

Rigid UCD methodology fails?

I received an e-mail from someone at the 2008 IA Summit about Jared Spool’s declaration that UCD is dead:

——Forwarded message——
From: P
Date: Sun 13/04/2008, 2:54 PM

Hi Jerome,

I’m at the iA Summit in Miami right now, and hearing about all of the things that are going on makes me think of you. One of the interesting sessions was Jared Spool’s keynote speech. He conducted research into what makes certain companies better able to produce effective designs. He used this model to talk about the various approaches departments do to facilitate design:

[Image: Things that facilitate design]

He said all design involves a process, whether it’s been formalized or not. Interesting, though not surprising: companies that have dogmatic UCD leadership or use a rigid UCD methodology are unlikely to create anything innovative. To innovate, you want to apply techniques in sometimes surprising ways to solve problems that they were not intended for (those are the “tricks”.)

OK, I’m going back out in the warm (hot!) weather.

– P

Of course, the lack of a process doesn’t guarantee innovation either, nor does it guarantee you’ll be able to repeat your (accidental) successes. I believe a successful design process must involve some form of generative design—as Five Sketches™ does—that’s based on knowledge of the user condition. I also believe that, once you’ve internalised those two things, you can use almost any form of facilitation to design good products.