Functional sophistication, not complexity

Some software companies add ever more features to their software as a way to differentiate it from its competitors. Lucinio Santos’ lengthy analysis of sophistication versus complexity includes this graphic:

[Graphic: functional sophistication versus complexity]

An excellent example of simplification is the Microsoft Office ribbon. Many users who upgrade dislike the ribbon for months because of the sheer amount of GUI change it imposes, but the ribbon successfully simplifies the interface and makes existing features more discoverable.

Incidentally, the Office ribbon was designed using generative design. I facilitated a ribbon-design project in which a team of developers used Five Sketches™, a method that incorporates generative design.

Blended usability approach “best”

I received a brochure in the mail, titled Time for a Tune-up: How improving usability can improve the fortunes of your web site. It recommends this blend of usability methods:

  • Expert reviews focus on workflows and give the best results when the scope is clearly defined.
  • Usability studies are more time-consuming than expert reviews, but they are the best way to uncover real issues.
  • Competitor benchmarking looks at the wider context.

The brochure was written by online-marketing consultants with Web sites in mind, but its content is also relevant to other development activities.

Effectiveness of usability evaluation

Do you ever wonder how effective expert reviews and usability tests are? Apparently, they can be pretty good.

Rolf Molich and others have conducted a series of comparative usability evaluation (CUE) studies, in which a number of teams evaluated the same version of a web site or application. Each team chose its own preferred method, such as an expert review or a usability test. The teams’ reports were then evaluated and compared by a small group of experts.

What the first six CUE studies found

About 30% of the reported problems were found by multiple teams; the remaining 70% were found by a single team only. Fortunately, of that 70%, only 12% were serious problems. In one CUE study the average number of reported problems was 9 per team, so a typical team would overlook only 1 or 2 serious problems. The process isn’t perfect, but teams found 80% or more of the serious problems.
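
A quick back-of-the-envelope check of those figures, under the simplifying assumption (mine, not the studies’) that the 70% and 12% shares apply directly to a single team’s average report of 9 problems:

    # Back-of-the-envelope arithmetic for the CUE figures quoted above.
    # Assumption (not from the studies themselves): the 70% single-team share
    # and the 12% serious share apply to one team's average report of 9 problems.

    avg_reported = 9           # average problems reported per team
    single_team_share = 0.70   # share of problems found by only one team
    serious_share = 0.12       # share of those single-team finds that were serious

    single_team_finds = avg_reported * single_team_share      # about 6 problems
    serious_single_finds = single_team_finds * serious_share  # about 0.8 of a problem

    print(f"Single-team finds per report: {single_team_finds:.1f}")
    print(f"Serious among them:           {serious_single_finds:.1f} "
          "(roughly one serious problem that the other teams overlooked)")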

Teams that used usability testing found more usability problems than expert reviewers did. However, expert reviewers are more productive (they found more issues per hour), as this graph from the fourth CUE study illustrates:

[Graph: CUE study 4 results]

Teams that found the most usability problems (over 15 when the average was 9) needed much more time than the other teams, as illustrated in the above graph. Apparently, finding the last few problems takes up the most time.

The CUE studies do not consider the politics of usability and software development. Are your developers sceptical of usability benefits? Usability studies provide credibility-boosting video as visual evidence. Are your developers using an Agile method? Expert reviews provide quick feedback for each iteration.

To learn more about comparative usability evaluation, read about the findings of all six CUE studies.

Complicated GUI is fixable

According to usability guru Jakob Nielsen, the worst mistakes in GUI design are domain-specific. Usually, he says, applications fail because they:

  • solve the wrong problem.
  • have the wrong features for the right problem.
  • make the right features too complicated to understand.

Nielsen’s last point reminds me of what a product manager once told me: many users of highly specialised software think of themselves as experts, but only a few are. His hypothesis? Elaborate feature sets are too numerous or complex to learn fully.

One of my projects involved software for dieticians. The software allowed users to enter a recipe and would calculate its nutritive value per portion. Users learned the basic settings, which gave an adequate result, and ignored the extra features that could take into account various complex chemical interactions between the recipe ingredients. Those extra features added visual and cognitive complexity, and ironically their very presence increased the likelihood that users would satisfice, that is, avoid the short-term pain of learning something new. When the product was developed, each extra bit seemed a good idea, and each may also have helped sell the product. But, good idea or not, those extra features needed to be removed, hidden from the majority of users, or redesigned.
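
To make the contrast concrete, here is a minimal sketch of the kind of per-portion calculation the basic settings covered. The names, nutrient values, and the single “interactions” switch below are illustrative assumptions, not the actual product’s code:

    from dataclasses import dataclass

    @dataclass
    class Ingredient:
        name: str
        grams: float
        kcal_per_100g: float   # one nutrient keeps the sketch short

    @dataclass
    class Recipe:
        ingredients: list
        portions: int

    def kcal_per_portion(recipe, apply_interactions=False):
        """Sum the ingredients' energy and divide by the number of portions.

        The plain sum below is the 'basic settings' path that users learned.
        The rarely used extras would plug in where apply_interactions is
        checked; in practice most users never switched them on.
        """
        total = sum(i.grams / 100.0 * i.kcal_per_100g for i in recipe.ingredients)
        if apply_interactions:
            # Placeholder for the complex ingredient-interaction adjustments
            # that most users ignored.
            pass
        return total / recipe.portions

    cookies = Recipe(
        ingredients=[Ingredient("flour", 250, 364), Ingredient("butter", 125, 717)],
        portions=12,
    )
    print(f"{kcal_per_portion(cookies):.0f} kcal per portion")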

Resolving the “extra features” problem
  1. If the extra features are superfluous, remove them. Usage data can help identify seldom-used features; many of our products can collect usage data, though we currently collect it only after crashes and mainly during Beta testing. However, removing a seldom-used part of an existing feature is a complex decision, and one for the Product Manager to make. The difficulty lies in determining whether a feature would be used more if it were simpler to use; in that case, it may not be superfluous.
  2. If the extra features are used only occasionally by relatively few users, then hide them. The suggested GUI treatment for an occasional-by-few control is to expose it only in the context of a related task. Do not clutter the main application window, menu bar, or the main dialog boxes with controls for occasional-by-few tasks. Hiding the controls for an occasional-by-few task is supported by the Isaacs-Walendowski frequency-commonality grid (a short code sketch of the grid also follows this list):
       If the feature is…      Used by many                Used by few
       Used frequently         Visible; few clicks.        Suggested; few clicks.
       Used occasionally       Suggested; more clicks.     Hidden; more clicks.
  3. If the extra feature is to be a core feature, simplify it. I’m talking about a feature that the Product Manager believes would be used frequently or by many if users could figure it out. Burying or hiding such features isn’t the answer. You need to find ways to reduce complexity by designing the interaction well and by organising the GUI well. For this, Five Sketches™ can help.
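
The grid in point 2 can also be applied mechanically during a design review. Here is a minimal sketch; the encoding and wording are my own illustration, not part of the Isaacs-Walendowski material or of any product:

    # The frequency-commonality grid as a simple lookup table.
    # Keys are (frequency, commonality); values describe the GUI treatment.
    GRID = {
        ("frequent",   "many"): "Visible in the main GUI; reachable in few clicks.",
        ("frequent",   "few"):  "Suggested in context; reachable in few clicks.",
        ("occasional", "many"): "Suggested in context; more clicks are acceptable.",
        ("occasional", "few"):  "Hidden until a related task exposes it; more clicks.",
    }

    def gui_treatment(frequency, commonality):
        """Return the suggested GUI treatment for a feature."""
        return GRID[(frequency, commonality)]

    # Example: the dieticians' ingredient-interaction feature was used
    # occasionally and by few users, so it belongs off the main window.
    print(gui_treatment("occasional", "few"))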

What are the requirements, really?

All this raises the question: who can tell us which features are the extra features (the features to omit), which ones are occasional-by-few (the features to hide), and which ones are used frequently or by many users (the features on which to focus your biggest design guns)? Nielsen says, “Base your decisions on user research”, and then advises testing the early designs with users. He adds:

People don’t want to hear me say that they need to test their UI. And they definitely don’t want to hear that they have to actually move their precious butts to a customer location to watch real people do the work the application is supposed to support. The general idea seems to be that real programmers can’t be let out of their cages. My view is just the opposite: no one should be allowed to work on an application unless they’ve spent a day observing a few end users.

Conclusion: conduct user research and use what you learn to inform the design.