Cumulative cost of a few seconds

Currently, I’m on a project team that’s designing, building, and implementing call-centre software. You can probably imagine the call-centre experience from the customer side—we’ve all had our share of call-centre experiences. I’ve been looking at call centres from the other side—from the perspective of the customer-service agents and their employer.

I started by observing customer-service agents on the job. At the site I visited, the agents were using a command-line system, and the agents typed so fast that I couldn’t make sense of their on-screen actions. I signed up for several weeks of training to become a novice customer-service agent. This allowed me to make sense of my second round of observations, and appreciate how efficiently the agents handle their customer calls. It also helped me to identify tasks where design might improve user performance.

Wrap-up choices

For example, after each call the agent decides why the customer called, and then, by scanning lists of main reasons and detailed reasons, “wraps up” the call, as illustrated. I measured the time on task: the average wrap-up takes nine seconds.

It’s only nine seconds

Nine seconds may not seem long, but let’s make a few (fictitious but reasonable) assumptions, and then do a little math.

If the average call-handling time is five minutes, or 300 seconds, the 9 seconds spent on call wrap-up is 3% of the total handling time. A full-time agent could spend 202,500 seconds—that’s 56¼ hours per year—on call wrap-ups, assuming a 7½-hour workday and no lulls in incoming calls. Since call volumes vary, there will be times when call volumes are too low to keep all agents taking calls. The customer-service agents have other tasks to complete during such lulls, but if we assume lulls occupy about a third of their time, we need to reduce the 56¼ hours accordingly. Let’s choose a convenient number: 40 hours, or one workweek per agent per year.
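To make the arithmetic explicit, here is the same back-of-envelope calculation in a few lines of Python. The 250-workday year is my assumption; it is what makes the figures in the text come out.

```python
# Reproducing the wrap-up arithmetic from the text.
WORKDAY_SECONDS = 7.5 * 3600       # 7.5-hour workday
CALL_SECONDS = 300                 # average call-handling time
WRAP_SECONDS = 9                   # average wrap-up time
WORKDAYS_PER_YEAR = 250            # assumed full-time year

calls_per_day = WORKDAY_SECONDS / CALL_SECONDS          # 90 calls
calls_per_year = calls_per_day * WORKDAYS_PER_YEAR      # 22,500 calls
wrap_seconds_per_year = calls_per_year * WRAP_SECONDS   # 202,500 seconds
wrap_hours = wrap_seconds_per_year / 3600               # 56.25 hours

# Agents take calls about two-thirds of the time; the text rounds the
# result to a convenient 40 hours (one workweek).
print(wrap_hours)          # 56.25
print(wrap_hours * 2 / 3)  # 37.5
```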

One workweek is 2% of the year.

Based on this number, a redesigned call wrap-up that takes only half the time would save one percent of the labour. Eliminating the wrap-up entirely would save two percent. That frees a lot of hours for other tasks.

A similar calculation on the cost side (n hours to design and implement changes) leaves us with a simple subtraction. Projected saving minus cost is the return on investment, or ROI. Comparing that number to similar numbers from other projects that we could tackle instead—the opportunity costs—makes it easy to decide which design problem to tackle.

If the user can’t use it, it’s broken

A few days ago, I tried to pump up my bicycle tires. I had to borrow a pump.

Bike-tire pump

The connectors and attachments suggested this pump would fill North-American and European tire tubes as well as air mattresses, soccer balls, and basketballs.

But the thing is, neither the pump’s owner nor I was able to make it work. We couldn’t pump up my bike tires.

Was it me? Was it the pump’s owner? Or was it the pump’s design?

If the user can’t use it, it’s broken (…or it may as well be).

Natural mapping of light switches

I recently moved into a home where the light switches are all wrong. I was able to fix one problem, and the rest is a daily reminder that usability doesn’t just happen by itself.

In one pair of light switches, the left switch controlled a lamp to the right, and the right switch controlled a lamp to the left. The previous resident’s solution to this poor mapping was to put a red dot on one of the switches, presumably as a reminder. I put up with that for about 3 days, and then it was time to fix the mapping.

Swapping light switches

Now, the left switch is for the lamp on the left, and the right switch is for the lamp on the right. That’s natural mapping.

If you want to read more about natural mapping, check out this blog about interaction design and usability. It presents a classic natural-mapping problem: on a kitchen stove, which dial controls which burner?

Meanwhile, at my home there are other problems with light switches, but they aren’t about mapping. In one case, the light switch is far from the door, so at night I must cross a dark room to reach the switch. In another case, the light over the stairs is controlled by two switches that are improperly wired, so both switches must be in the “on” position. If you guessed that one switch is upstairs and the other downstairs, you’re correct. To light the stairs, often I must run up or down the dark staircase to flip the switch.

All this is both amusing and irritating and, as I already said, a daily reminder that usability doesn’t just happen. To get it right, usability takes planning and attention during implementation.

A banister has multiple user groups

We don’t always know what a design is intended to convey. We don’t always recognise or relate to a design’s intended user groups. But we don’t have to know everything that an object’s design is intended to do, in order to make effective use of the object.

I imagine the metal inserts in the wooden banister (see the video, above) are detectable warnings for people who are visually impaired, but that’s only a guess. If you watch the video again, you’ll see that the metal inserts do not occur at every bend in the staircase.

Whatever the intent, the banister fully met my needs.

Durable design: still possible?

A simple and good design can last and last. Consider the qualities of a BC Telephones operator’s chair from the 1930s:

Telephone operators (ca. 1932)

Environmentally defensible. It is made primarily of a renewable resource—wood—and is so durable that, after decades, it still withstands daily use.

Functional. Originally, at BC Tel, this chair fit a small space, swivelled so the operator could get in and out of a small workspace, and provided a place for the operator’s personal items. After it was decommissioned, this compact and strong chair continued to be functional in other settings.

Aesthetically appealing. I’m thinking of the wood, the form, and the chair’s history. This chair has only marginally been repurposed, because it still seats people as they connect to a telco service—formerly a telephone, now Internet access.

Can we still design objects that last as long as this chair has?

Prioritising your web-design work

When you have limited resources, how do you prioritise what to provide on an e-commerce website? In the Pew Internet & American Life Project report Generations Online 2009, Jones and Fox present data in a format that helps you prioritise.

To answer these sample questions, consider the colour coding in the data, below:

  • Should you do any search-engine optimisation (SEO) on your site? What kinds of things will users search for?
  • Should you provide video on your site? What kinds of users will watch video?
  • Should you help site visitors research your products? Should you deliver that information in a podcast?

An excerpt of the Generations On-Line 2009 data

Below are common online activities, by age group. Within each group, activities are listed from most-reported to least-reported.

The boxes ( █ = 5% ) show the portion of the online population in each age group (total = 100% of online USA adults).‡

Ages 18-32 ( █ █ █ █ █ █ )
More than half: E-mail. Search. Research a product. Get news. Watch video. Buy something. Get health info. Visit SNS*. Make travel reservation. Get job info. Create SNS* profile. Instant messaging. Download music. Banking. Visit gov’t site. Research for job. Play games.
Less than half: Read blog. Download video. Rate product. Get religious info. Auction. Podcast. Create blog. Visit virtual world.

Ages 33-44 ( █ █ █ █ ▀ )
More than half: E-mail. Search. Research a product. Get health info. Buy something. Get news. Make travel reservation. Banking. Visit gov’t site. Research for job. Watch video. Get job info.
Less than half: Download music. Instant messaging. Get religious info. Play games. Visit SNS*. Rate product. Read blog. Download video. Auction. Create SNS* profile. Podcast. Create blog. Visit virtual world.

Ages 45-54 ( █ █ █ █ ▀ )
More than half: E-mail. Search. Research a product. Get health info. Get news. Make travel reservation. Buy something. Visit gov’t site. Research for job. Banking.
Less than half: Watch video. Get job info. Get religious info. Rate product. Instant messaging. Auction. Read blog. Play games. Download music. Download video. Visit SNS*. Podcast. Create SNS* profile. Create blog. Visit virtual world.

Ages 55-63 ( █ █ ▀ )
More than half: E-mail. Search. Get health info. Research a product. Buy something. Get news. Make travel reservation. Visit gov’t site.
Less than half: Banking. Research for job. Get job info. Watch video. Rate product. Get religious info. Play games. Auction. Read blog. Instant messaging. Download music. Download video. Podcast. Visit SNS*. Create SNS* profile. Visit virtual world.

Ages 64-72 ( █ )
More than half: E-mail. Search. Research a product. Get health info. Make travel reservation. Visit gov’t site. Buy something. Get news.
Less than half: Banking. Research for job. Get religious info. Rate product. Play games. Instant messaging. Watch video. Read blog. Auction. Download music. Download video. Get job info. Visit SNS*. Podcast. Create blog. Create SNS* profile. Visit virtual world.

Ages 73+ ( ▀ )
More than half: E-mail. Search. Get health info. Make travel reservation. Research a product.
Less than half: Buy something. Get news. Visit gov’t site. Get religious info. Banking. Instant messaging. Play games. Rate product. Read blog. Watch video. Get job info. Podcast. Research for job. Auction. Create blog. Download music. Visit SNS*. Create SNS* profile. Visit virtual world.

*  Social-networking service.
‡ Data from teens (ages 13-17) isn’t listed here because I had incomplete population data. However, teens do a smaller range of online activities than all adult age groups.

By examining the data, you can quickly see the size of your potential audience and the likelihood that they’ll engage in a particular activity. For example, you’ll definitely want to optimise your site to help potential customers find product information, both before they get there (from Google or Bing) and after they get there. This may seem like an obvious statement, but ask yourself: “What three things have I done to improve the SEO on my site?”
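As a toy illustration, the excerpt can be treated as a lookup table. The two groups below are transcribed from the data above; `majority_does` is a hypothetical helper, not part of any published dataset or API.

```python
# Treating the excerpt as a lookup table: which activities more than half
# of each online age group reports doing (two groups transcribed as examples).
majority_activities = {
    "18-32": {"E-mail", "Search", "Research a product", "Get news", "Watch video",
              "Buy something", "Get health info", "Visit SNS",
              "Make travel reservation", "Get job info", "Create SNS profile",
              "Instant messaging", "Download music", "Banking", "Visit gov't site",
              "Research for job", "Play games"},
    "73+": {"E-mail", "Search", "Get health info", "Make travel reservation",
            "Research a product"},
}

def majority_does(age_group: str, activity: str) -> bool:
    """True if more than half of that online age group reports the activity."""
    return activity in majority_activities[age_group]

print(majority_does("18-32", "Watch video"))  # True: video suits this audience
print(majority_does("73+", "Watch video"))    # False: a minority activity here
```

For the sample question about providing video: worthwhile if your audience skews young, much less so if it skews 73+.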

Learning from a poke in the face

During usability testing, I’m always fascinated to see how creatively users misinterpret the team’s design effort. I’ve seen users blame themselves when our design failed, and I’ve seen users yell at the screen because our GUI design was so frustrating.

Wednesday, the tables were turned.

I unintentionally “agreed” to let Facepoke—that social-networking site—invite everyone with whom I’d ever exchanged e-mail. Think about all the people you may have exchanged e-mail with. Former bosses and CEOs. Your kid’s teachers and the principal, too. People you used to date. Prospective business partners, or people you’ve asked for work but who turned you down. Your phone company, car-rental company, bank, and insurance company. Government agencies. The person you just told “I’m too busy to volunteer,” and your teammates from that course in 2005. Your e-mail records are full of people that you simply wouldn’t want on your Facepoke page.

How could I be so stupid?

See paragraph 1:  User blames self for poor design.

Facepoke had been interrupting my flow for several days, offering to help me find Friends by examining my Gmail records.

1.    I gave in, chose three Friends, and clicked Invite.

The screen flashed, but the list was still there.

2.    I clicked Invite again.

Then came the moment of horror: I saw that the list had been changed! Switched! It was now a list of every e-mail address in my Gmail records that was not already associated with a Facepoke account.

With that second click, I had “agreed” to let Facepoke invite everyone with whom I had ever exchanged e-mail. There was no confirmation, no “Invite 300 people? Really?!?”

3.    I sought in vain for a way to Undo.

With each passing minute, I thought of more and more people who would have received this inappropriate invitation to join me on Facepoke.

Facepoke

Why wasn’t there a confirmation?

See paragraph 1:  User emotes in frustration.

Note to self: Always do better than this

In my usability and design work, I will continue to ask: “What’s the worst that can happen?” I will promote designs that prevent the worst that can happen. I will not present two apparently identical choices back to back, one of little consequence and one of great consequence. I will allow users to control their accounts and to undo or recover from unintended actions. I will not make users feel they’ve been misled.
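One way to encode that note-to-self is to scale the safeguard to the consequence. The classes below are an illustrative sketch under my own invented names, not a real social-network API.

```python
# A sketch of "prevent the worst that can happen": high-consequence actions
# demand explicit confirmation, and every action stays undoable.
from dataclasses import dataclass, field

@dataclass
class InviteAction:
    recipients: list = field(default_factory=list)
    sent: bool = False

    def needs_confirmation(self) -> bool:
        # Inviting three chosen friends is low-consequence; inviting an
        # entire address book is not. The threshold here is arbitrary.
        return len(self.recipients) > 10

class InviteService:
    def __init__(self) -> None:
        self.outbox = []

    def send(self, action: InviteAction, confirmed: bool = False) -> None:
        if action.needs_confirmation() and not confirmed:
            raise PermissionError(
                f"Invite {len(action.recipients)} people? Really? Confirm first.")
        action.sent = True
        self.outbox.append(action)

    def undo_last(self) -> None:
        # The recovery path the original design lacked.
        if self.outbox:
            self.outbox.pop().sent = False
```

Sending to three friends succeeds quietly; sending to 300 raises unless explicitly confirmed, and either send can be undone.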

The business case for design: ROI

Peter Merholz of Adaptive Path explained his view that customer experience is an investment, not a cost, in an article this week on Harvard Business Publishing’s site.

I adapted one of the “linking elephants” illustrations in the Merholz article by adding another row of boxes and text to illustrate what Merholz says: it is design that motivates people to modify their behaviour. I also added an ROI or return-on-investment calculation.

merholz-linking-elephants

Design makes the difference. By “design” I don’t just mean how it looks; I’m including the mental model (how the site visitors or users think the site meets their needs), the workflow and interaction (how users complete the task), the experience (including how users feel about using the site), and the prototyping and formative usability testing needed to validate the proposed changes. Your business case needs to include the cost of all this design work.

Making a business case can be intimidating, but the above illustration shows that it’s conceptually easy. In your business case, you predict a project’s benefit to the company, using the best estimates you can come up with. What is your organisation’s goal? What behavioural change on the part of site visitors moves you toward that goal? What is that behaviour worth, either in revenue or in reduced costs? How many site visits do you get per week? What’s the potential impact to your organisation of redesigning part of your site? After you implement the new design, use summative testing and comparisons to check the assumptions of your business case.
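Those questions reduce the whole business case to a few lines of arithmetic. Every number below is an invented example for illustration, not data from Merholz’s article.

```python
# A hypothetical business-case sketch: predicted benefit minus design cost.
visits_per_week = 10_000
weeks_per_year = 52
conversion_now = 0.020         # current share of visitors who complete the goal
conversion_after = 0.025       # predicted share after the redesign
value_per_conversion = 30.0    # revenue or cost saving per completed goal

extra_conversions = (visits_per_week * weeks_per_year
                     * (conversion_after - conversion_now))
projected_benefit = extra_conversions * value_per_conversion
design_cost = 40_000.0         # design, prototyping, and formative testing

roi = projected_benefit - design_cost   # projected saving minus cost
print(f"Projected annual benefit: ${projected_benefit:,.0f}")
print(f"ROI: ${roi:,.0f}")
```

Comparing this number across candidate projects is how you weigh the opportunity costs.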

I recommend reading Merholz’s article on the Harvard Business Publishing site.

Low-fi sketching increases user input

Here are three techniques for eliciting more feedback on your designs:

  • show users alternatives, so that they see more than one design.
  • show users a low-fidelity rather than a high-fidelity rendering.
  • ask users to sketch their feedback.

To iterate and improve the design, you need honest feedback.  Let’s look at how and why each of these techniques might work.

Showing alternative designs signals that the design process isn’t finished. If you engage in generative design, you’ll have several designs to show to users. Users are apparently reluctant to critique a completed design, so a clear signal that the process is not yet finished encourages users to voice their views, though only somewhat.

Using a low-fidelity rendering elicits more feedback than the same design in a high-fidelity rendering. Again, users are apparently reluctant to critique something that looks finished—as a high-fidelity rendering does.

hi-fi_vs_low-fi_sketching

The design is the same, but it feels more difficult to criticise the one on the right.

Asking users to sketch their feedback turns out to be the single most important factor in eliciting feedback. It’s not known why, because there hasn’t been sufficient published research, but I hypothesise that sketching is the most indirect form of criticism.

Where’s the evidence for sketched feedback?

The evidence is unpublished and anecdotal. The problem with unpublished data is that you must be in the right place at the right time to get it, as I was during the UPA 2007 annual conference when Bill Buxton asked the room for a show of hands. Out of about 1000 attendees, several dozen said they had received more and better design-related feedback by asking users to sketch than by eliciting verbal feedback.

When you ask a user: “Tell me how to make this better,” they shrug. When you hand them a pen and paper and ask: “Sketch for me how to make this better,” users start sketching. They suddenly have lots of ideas.

My own experience agrees with this. In Perth, Australia, I took sketches from a Five Sketches™ design session to a customer site for feedback. I also brought blank paper and pens, and asked for sketches of better ideas.

Not surprisingly, the best approach is to combine all three techniques: show users several low-fidelity designs, and then ask them to sketch ways to make the designs better.

Teamwork reduces design risk

It takes a range of skills to develop a product. Each skill—embodied in the individuals who apply that skill—brings with it a different focus:

  • Product managers talk about features and market needs.
  • Business development talks about revenue opportunities.
  • Developers talk about functionality.
  • Usability analysts talk about product and user performance.
  • Interaction designers talk about the user experience.
  • QA talks about quality and defects.
  • Marketing talks about the messaging.
  • Technical communicators talk about information and assistance.
  • You may be thinking of others. (I’m sorry I’ve overlooked them.)

What brings all these together is design.

Design as a team

Proper design begins with information: a problem statement or brief; information about the targeted users and the context of use; the broad-brush business constraints; some measurable outcomes (requirements, metrics, or goals); and access to a subject-matter expert who can answer questions about the domain and perhaps present a competitor analysis.

Design continues by saturating the design space with ideas and, after the space is filled, analysing and iterating the ideas into possible solutions. The design process will raise questions, such as:

  • What is the mental model?
  • What are the use cases?
  • What is the process?

The analysis will trigger multiple rapid iterations as the possible solutions converge toward a single design:

  • Development. What are the technical constraints? Coding costs? Effect on the stability of the existing code? Downstream maintenance costs?
  • Usability. Does the design comply with heuristics? Does it comply with the standards of the company, the industry, and the platform? Does paper prototyping predict that the designed solution is usable?
  • User experience. Will the designed solution be pleasing due to its quality, value, timeliness, efficiency, innovation, or the five other common measures of customer satisfaction?
  • Project sponsor. Is the design meeting the requirements? Is it reaching its goals or metrics?

It’s risky to expect one person—a developer, for example—to bring all these skills and answers to the table.  A team—properly facilitated and using an appropriate process—can reduce the risk and reliably produce great designs.