Designing *with* developers

Today, Joel Spolsky blogged about development process and design. He makes a couple of points I agree with. For example, he says that developers don’t know how to do everything. He makes this point first by describing his own lack of skills in an early job at Microsoft, and later by describing the skills that even very experienced developers lack:

Your garden-variety super-smart programmer is going to come up with a completely baffling user interface that makes perfect sense IF YOU’RE A VULCAN (cf. git). The best programmers are notoriously brilliant, and have some trouble imagining what it must be like not to be able to memorize 16 one-letter command line arguments. These programmers then have a tendency to get attached to their first ideas, especially when they’ve already written the code.

At this point, reading Spolsky’s blog, I’m like the eager school kid with his hand up: “Oh, oh, I know the answer!” Five Sketches™ was specifically developed to prevent the people-attached-to-their-first-ideas problem and to ensure collaboration between developers and other members of the team.

Software products are developed by teams but, as Spolsky goes on to say, some team members have more power than those who work alongside them, because developers control the code. He tells this story:

       A programmer asks me to intervene in some debate he is having with a [non-developer peer; in this story it’s a Program Manager who performs a design function].
       “Who is going to write the code?” I asked.
       “I am….”
       “OK, who checks things into source control?”
       “Me, I guess, ….”
       “So what’s the problem, exactly?” I asked. “You have absolute control over the state of each and every bit in the final product. What else do you need? A tiara?”

Spolsky’s solution is to rely on the ability of others [in this case, the Program Manager] to convince the developer to make the necessary changes. But requiring stakeholders and colleagues to engage in persuasion is risky; the outcome depends not only on the technical skill and experience of both parties, but also on their credibility and their skills in rhetoric, persuasion, and interpersonal communication.

I recall when we first developed our ideation-and-design process on an interdisciplinary team. We had Sharon debrief each of us to find out what had worked and what hadn’t, and then we addressed the shortcomings and frustrations in the process. This is how we developed Five Sketches™. And this is why it works … with developers … and with people from QA, Marketing, Tech-Comm, and so on.

By the way, you can read Joel Spolsky’s entire blog post here.

Customers can’t do your job

Agile methodology can produce usable products, as long as you know what you’re doing. A common pitfall in agile is the incorrect assumption that you’ll get a usable product simply by building what the client tells you to build.

When there is some question about how to make a feature usable, customers may have something to say, but their answers are more likely to be based on opinion and emotion than on design experience and behavioural observation.

Brazilian blogger and computer-science and HCI expert Francisco Trindade gives this illustration: If you had asked people how they would like to search for Web pages in the pre-Google era, how many would have asked for a blank page with a text box on it…?

Can customers design? Probably not. Trindade says that, regardless of whether this ask-the-client behaviour is laziness or a responsibility-avoidance strategy, people who design software need to “stop pretending that the client has all the answers, and trust a little bit more in themselves to create solutions.”

Creating solutions? That’s a job for developers and the Five Sketches™ method, or any other design method they’re comfortable with.

From napkin to Five Sketches™

In 2007, a flash of insight hit me, which led to the development of the Five Sketches™ method for small groups who need to design usable software. Looking back, it was an interesting journey.

The setting. I was working on a two-person usability team with six major software and web products to support. We were empowered to do usability, but not design. At the time, the team was in the early stages of Nielsen’s Corporate Usability Maturity model. Design, it was declared, would be the responsibility of the developers, not the usability team. I was faced with this challenge:

How to get usable products
from software and web developers
by using a method that is
both reliable and repeatable.

The first attempt. I introduced each development team to the usability basics: user personas, requirements, paper prototyping, heuristics, and standards. Some developers went for usability training. In hindsight, it’s easy to see that none of this could work without a formal design process in place.

The second attempt. I continued to read, to listen, and to ask others for ideas. The answer came as separate pieces, from different sources. For several months, I was fumbling in the metaphorical dark, having no idea that the answer was within reach. Then, after a Microsoft product launch on Thursday, 18 October 2007, the light went on. Sitting on a bar stool, the event’s guest speaker, GK Vanpatter, mapped out an idea for me on a cocktail napkin:

  1. Design requires three steps.
  2. Not everyone is comfortable with each of those steps.
  3. You have to help them.

The quadrants in his napkin sketch are the conative preferences, or preferred problem-solving styles.

I recognised that I already had an answer to step 3, because I’d heard Bill Buxton speak at the 2007 UPA conference, four months earlier. I could help developers be comfortable designing by asking them to sketch.

That was easier said than done. Everyone on that first team showed dedication and courage. We had help from a Vancouver-based process expert who skilfully debriefed each of us and then served us a summary of the remaining problems to iron out. And, when we were done, we had the beginnings of an ideation-and-design method.

Since then, it’s been refined with additional teams of design participants, and it will be refined further—perhaps changed significantly to suit changing circumstances. But that’s the story of the first year.

Failure, then sketching success

Developing Five Sketches™ came with its share of challenges. Fortunately, all obstacles were overcome. Some of those obstacles were even on the official Ways To Fail list in *The Big Moo*, the book edited by Seth Godin.

I’m glad I didn’t have to face them all:

  • Keep secrets.
  • Set aggressive deadlines for others to get buy in—then change them when they aren’t met.
  • Be certain you’re right and ignore those who disagree with you.
  • Resist testing your theories.
  • Focus more on what people think and less on whether your idea is as good as it could be.
  • Assume that a critical mass must embrace your idea for it to work.
  • Choose an idea where the preceding point is a requirement.
  • Assume that people who don’t instantly get your idea are bullheaded, shortsighted, or even stupid.
  • Don’t bother to dramatically increase the quality of your presentation style.
  • Insist that you’ve got to go straight to the president of the organisation to get something done.
  • Always go for the big win.

Of course this is only funny in retrospect, now that I can see my way through any design challenge. Swimmingly.

Functional sophistication, not complexity

Some software companies add ever more features to their software as a way to differentiate it from its competitors. Lucinio Santos’ lengthy analysis of sophistication versus complexity includes this graphic:

Functional sophistication, not complexity

An excellent example of simplification is the Microsoft Office ribbon. Many users who upgrade dislike the ribbon for months because of the sheer amount of GUI change it imposes, but the ribbon successfully simplifies the interface and makes existing features more discoverable.

Incidentally, the Office ribbon was designed by a design team using generative design. I facilitated a ribbon-design project in which a team of developers used Five Sketches™, a method that incorporates generative design.

Blended usability approach “best”

I received a brochure in the mail, titled Time for a Tune-up: How improving usability can improve the fortunes of your web site. It recommends this blend of usability methods:

  • Expert reviews focus on workflows and give best results when the scope is clearly defined.
  • Usability studies are more time-consuming than expert reviews, but are the best way to uncover real issues.
  • Competitor benchmarking looks at the wider context.

The brochure was written by online-marketing consultants with Web sites in mind, but its content is also relevant to other development activities.

Rigid UCD methodology fails?

I received an e-mail from someone at the 2008 IA Summit about Jared Spool’s declaration that UCD is dead:

——Forwarded message——
From: P
Date: Sun 13/04/2008, 2:54 PM

Hi Jerome,

I’m at the iA Summit in Miami right now, and hearing about all of the things that are going on makes me think of you. One of the interesting sessions was Jared Spool’s keynote speech. He conducted research into what makes certain companies better able to produce effective designs. He used this model to talk about the various approaches departments take to facilitate design:

Things that facilitate design

He said all design involves a process, whether it’s been formalized or not. Interesting, though not surprising: companies that have dogmatic UCD leadership or use a rigid UCD methodology are unlikely to create anything innovative. To innovate, you want to apply techniques in sometimes surprising ways to solve problems that they were not intended for (those are the “tricks”).

OK, I’m going back out in the warm (hot!) weather.

– P

Of course, a lack of process doesn’t guarantee innovation either, nor does it guarantee that you’ll be able to repeat your (accidental) successes. I believe a successful design process must involve some form of generative design, as Five Sketches™ does, based on knowledge of the users. I also believe that, once you’ve internalised those two things, you can use almost any form of facilitation to design good products.

Effectiveness of usability evaluation

Do you ever wonder how effective expert reviews and usability tests are? Apparently, they can be pretty good.

Rolf Molich and others have conducted a series of comparative usability evaluation (CUE) studies, in which a number of teams evaluated the same version of a web site or application. The teams chose their own preferred methods, such as an expert review or a usability test. Their reports were then evaluated and compared by a small group of experts.

What the first six CUE studies found

About 30% of reported problems were found by multiple teams; the remaining 70% were found by a single team only. Fortunately, of that 70%, only 12% were serious problems. In one CUE study, the average number of problems reported per team was 9, so a typical team working alone would overlook 1 or 2 serious problems. The process isn’t perfect, but teams found 80% or more of the serious problems.

Teams that used usability testing found more usability problems than expert reviewers. However, expert reviewers are more productive (they found more issues per hour), as this graph from the fourth CUE study illustrates:

CUE study 4 results

Teams that found the most usability problems (over 15 when the average was 9) needed much more time than the other teams, as illustrated in the above graph. Apparently, finding the last few problems takes up the most time.

The CUE studies do not consider the politics of usability and software development. Are your developers sceptical of usability benefits? Usability studies provide credibility-boosting video as visual evidence. Are your developers using an Agile method? Expert reviews provide quick feedback for each iteration.

To learn more about comparative usability evaluation, read about the findings of all six CUE studies.

Complicated GUI is fixable

According to usability guru Jakob Nielsen, the worst mistakes in GUI design are domain-specific. Usually, he says, applications fail because they:

  • solve the wrong problem.
  • have the wrong features for the right problem.
  • make the right features too complicated to understand.

Nielsen’s last point reminds me of what a product manager once told me: many users of highly specialised software think of themselves as experts, but only a few are. His hypothesis? The elaborate sets of features are too numerous or too complex to learn fully.

One of my projects involved software for dieticians. Users could enter a recipe, and the software would calculate its nutritive value per portion. Users learned the basic settings needed for an adequate result, and they ignored the extra features that could take into account the complex chemical interactions between the recipe ingredients. The extra features added visual and cognitive complexity and, ironically, their very presence increased the likelihood that users would satisfice, avoiding the short-term pain of learning something new. When the product was developed, each extra bit must have seemed a good idea, and each may also have helped sell the product. But, good idea or not, those extra features needed to be removed, hidden from the majority of users, or redesigned.

Resolving the “extra features” problem
  1. If the extra features are superfluous, remove them. Usage data can help identify seldom-used features, and many of our products are capable of collecting usage data, though we currently only collect it after crashes and mainly during Beta testing. (A hypothetical sketch of mining usage data for seldom-used features follows this list.) However, removing a seldom-used part of an existing feature is a complex decision, and one for the Product Manager to make. The difficulty lies in determining whether a feature would be used more if it were simpler to use. In that case, it may not be superfluous.
  2. If the extra features are used only occasionally by relatively few users, then hide them. The suggested GUI treatment for an occasional-by-few control is to expose it only in the context of a related task. Do not clutter the main application window, menu bar, or the main dialog boxes with controls for occasional-by-few tasks. Hiding the controls for an occasional-by-few task is supported by the Isaacs-Walendowski frequency-commonality grid, in which each cell gives the suggested GUI treatment (also expressed as decision logic after this list):

     | If the feature is… | Used by many            | Used by few            |
     |--------------------|-------------------------|------------------------|
     | Used frequently    | Visible; few clicks.    | Suggested; few clicks. |
     | Used occasionally  | Suggested; more clicks. | Hidden; more clicks.   |

  3. If the extra feature is to be a core feature, simplify it. I’m talking about a feature that the Product Manager believes would be used frequently or by many if users could figure it out. Burying or hiding such features isn’t the answer. You need to find ways to reduce complexity by designing the interaction well and by organising the GUI well. For this, Five Sketches™ can help.
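
As promised in item 1, here is a minimal, hypothetical sketch of mining usage data for seldom-used features. Nothing here reflects our products’ actual logging: the log file name, the one-row-per-invocation CSV format, and the 1% threshold are all assumptions made purely for illustration.

```python
from collections import Counter
import csv

def seldom_used_features(log_path="usage_log.csv", threshold=0.01):
    """List features that account for less than `threshold` of all recorded invocations.

    Assumes a hypothetical CSV usage log with one "date,feature" row per invocation.
    """
    counts = Counter()
    with open(log_path, newline="") as log:
        for _date, feature in csv.reader(log):
            counts[feature] += 1
    total = sum(counts.values()) or 1
    return sorted(name for name, n in counts.items() if n / total < threshold)

# Example: candidates to remove (if superfluous) or to hide (per the grid in item 2).
# print(seldom_used_features())
```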
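
And here is the frequency-commonality grid from item 2, expressed as decision logic. This is only a sketch of the grid as described above; the function name and the return strings are mine, not part of any published API.

```python
def gui_treatment(used_frequently: bool, used_by_many: bool) -> str:
    """Suggested GUI treatment per the Isaacs-Walendowski grid, as tabled above."""
    if used_frequently and used_by_many:
        return "Visible; few clicks."
    if used_frequently:
        return "Suggested; few clicks."
    if used_by_many:
        return "Suggested; more clicks."
    return "Hidden; more clicks."  # occasional-by-few: expose only in the context of a related task

# Example: an occasional-by-few feature gets hidden controls and a longer click path.
# print(gui_treatment(used_frequently=False, used_by_many=False))
```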

What are the requirements, really?

All this raises the question: who can tell us which features are the extra features (the features to omit), which ones are occasional-by-few (the features to hide), and which ones are used frequently or by many users (the features on which to focus your biggest design guns)? Nielsen says, “Base your decisions on user research,” and then advises testing the early designs with users. He adds:

People don’t want to hear me say that they need to test their UI. And they definitely don’t want to hear that they have to actually move their precious butts to a customer location to watch real people do the work the application is supposed to support. The general idea seems to be that real programmers can’t be let out of their cages. My view is just the opposite: no one should be allowed to work on an application unless they’ve spent a day observing a few end users.

Conclusion: conduct user research and use what you learn to inform the design.