Starting over on the right problem

If you’re designing a bedpan washer, do you design one that nurses don’t have to wait for?

According to a newspaper report, BC’s Centre for Disease Control (BC CDC) found that a British Columbia hospital had

    bedpan-cleaning machines that take 13 minutes for each cycle.

    If they wanted to ensure each bedpan got returned to the right patient, nurses had to stand by for the duration. […]

    The BC CDC found the [bedpan washing machines] to be inconvenient and too time consuming.

As an additional disincentive for nurses to wait out the 13 minutes, the newspaper says: “If you don’t load the machine exactly right, they not only don’t work, they sometimes spray aerosolized feces on you when the door is opened.”

Oh dear.
Cleaning bedpans by machine

It’s easy to ask pointed questions after the fact, but here goes. Since nurses are too rushed to wait 13 minutes, would an ethnographic study of hospitals have identified time pressure as a factor? Did researchers ask how long a nurse could or would wait for a bedpan washer? If the answer is “they won’t wait at all; they’ll go do something else,” then that reframes the design problem:

Can the machine track which bedpan gets returned to whom without relying on a nurse’s memory?

Can the machine clean bedpans so that it doesn’t matter to which patient they are returned?

These are very different design problems to solve. Other possible design questions to have asked:

Can the machine’s design prevent improper loading?

Can the machine avoid spraying fecal matter at the person who opens it? Or, if this problem wasn’t predictable at the outset: is the machine pleasant to use?

Can the machine be operated correctly by untrained users?

…and so on.

I’ve worked on projects where we thought we had the problem space clearly defined, and then—after exploring the design space and attempting to converge on a solution—realised that we had to redefine the problem and start over. I’d say that happens in about 20% of the projects I work on. I’ve also worked on a health product where we couldn’t change the hardware component, so we had to design a software and website solution to mitigate the hardware’s intermittent connectivity problem.

I don’t know anything about the design of the bedpan washer above, but I understand that the BC CDC implicated bedpans in the hospital’s outbreak of Clostridium difficile, and that the hospital switched to another cleaning method. The costs to the manufacturer are potentially horrendous. If the design team did everything right—including an iterative design process and early, user-involved testing—and still missed the mark, then they have my sympathy.

But now that they have a better understanding of the problem space—now that they know the “right” problem—they can design and build a better product.

Informing what you design and build

I was recently invited to join a design-specification review for a feature I’ll call Feature X.

As I listened to the presentation, I thought: “There are pieces missing from this spec.” When the time came for questions, I asked about the project’s scope. “Your spec is titled Feature X, but I see very little X described in this document. What does X mean to you?” Sure enough, there was a gap between the title of the design specification and its content. And the gap was deliberate, on the part of the Development team.

What we’re building

The company in question makes software, not cars or bicycles, but the gap between the spec’s title page and the spec’s content was just as great as the one in the illustration. The company’s potential customers say they want Feature X. The Development team says they only have the resources to build Feature Non-X. Non-X is missing some of the key features that define the X experience.

Except for its sleight-of-hand usefulness to sales staff, Feature Non-X may be a non-starter. But there’s one more thing to tell you:

  • Customers say they want Feature X, but the vast majority of users who already have Feature X don’t use it.

Apparently—I say “apparently” because the evidence is anecdotal—one reason customers who have a competitor’s X don’t use it is that X is complicated to set up and complicated to use. This is, of course, a golden opportunity to make a simple, usable feature that provides only what customers will use.

If this small company is lucky, their Feature Non-X will sell well and the company will leap-frog their Feature-X competitor. With a little marketing or ethnographic research, the company would have some certainty about why Feature X is requested but not used—and the team would have information to help them design Non-X. Unfortunately, a lack of resources may leave the team’s designer and developers guessing, and the company will have to take this uncertainty in stride.

This sugar packet is a movie

Whether it’s ethnographic research, usability research, or marketing research, I’ve learned that the best insights aren’t always gleaned from scheduled research.

Here’s a photo of impromptu research, conducted by Betsy Weber, TechSmith’s product evangelist. I was her research subject. Betsy recorded me pushing sugar packets around a table as I explained how I’d like Camtasia to behave.

Jerome demos an idea to Betsy. Photo by Mastermaq

Betsy takes information like this from the field back to the Camtasia team. There’s no guarantee that my idea will influence future product development, but what this photo shows is that TechSmith listens to its users and customers.

The ongoing stream of research and information that Betsy provides ensures better design of products that will be relevant and satisfying for TechSmith customers down the line.