Standard OK-Cancel button order

I have two stories about command buttons.

Quite a few years ago, a team member walked me through a new dialog box. He entered some data, and then unintentionally clicked the Cancel button. He made this error twice in a row, thus losing his changes twice in a row. I pointed out that the OK and Cancel buttons were in the wrong order. The developer switched the buttons to the Windows-standard layout, and the user-performance problem was solved.

A few years later, on a different project, not only were the buttons in non-standard order, they also used non-standard wording and coloured icons. My request to follow the Windows standard was met only half-way, and the change went to Beta testing before I saw it again. The buttons were now in the correct order, but the button names had changed, and the names and icons were still non-standard. Beta testers loudly protested the change. (Beta testers are often expert users, and experts abhor any change that slows them down.) At the time, the company was only a few steps up Nielsen’s Corporate Usability Maturity model, so instead of completing the change to Windows-standard OK and Cancel buttons, the buttons were rolled back to appease the protesting Beta users. I found out too late to retest with Windows-standard buttons, so there was no data to convince the developers. For me, it was an opportunity to learn from failure. :)
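
For readers who want the convention in code form, here is a minimal sketch in Python. It is not tied to any GUI toolkit, and the helper name is hypothetical; it simply encodes the rule that Windows puts the affirmative button to the left of Cancel, while most other desktops put it on the right.

```python
# Minimal sketch (hypothetical helper, not tied to any GUI toolkit):
# return dialog-button labels in the platform's conventional order.
import sys

def button_order(affirmative: str = "OK", dismissive: str = "Cancel") -> list[str]:
    """Return the two labels in the platform's conventional left-to-right order."""
    if sys.platform.startswith("win"):
        # Windows convention: the affirmative button sits to the left of Cancel.
        return [affirmative, dismissive]
    # macOS and most Linux desktops place the affirmative action rightmost.
    return [dismissive, affirmative]

print(button_order())        # ['OK', 'Cancel'] on Windows
print(button_order("Save"))  # ['Save', 'Cancel'] on Windows
```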

Why is non-standard so hard?

Try this Stroop test: ignore the words and, instead, identify the colours, out loud. No doubt, the mismatched panel went slower and took more effort.

Try the variation: find the first occurrence of the word Blue. Next, find the first occurrence of the colour blue.

Just as mismatches between text and colour slow your Stroop-test performance, mismatches between standard and non-standard OK and Cancel buttons slow user performance. Our Beta users clicked the wrong buttons—a huge waste of their time—because the new solution didn’t follow any standard. The Beta testers were right to protest, but wrong in their demand to revert to the original non-standard state. (See: Customers can’t do your job.)
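
To make the mismatch concrete, here is a small sketch of Stroop-style stimuli. It is only an illustration of the idea: congruent pairs show a colour word in its own ink colour, while incongruent pairs force a mismatch, and naming the ink colour then takes longer.

```python
# Sketch of Stroop-style stimuli: each pair is (word, ink colour).
import random

COLOURS = ["red", "green", "blue", "yellow"]

def stroop_pair(congruent: bool) -> tuple[str, str]:
    """Return (word, ink_colour); incongruent pairs deliberately mismatch."""
    word = random.choice(COLOURS)
    if congruent:
        return word, word
    ink = random.choice([c for c in COLOURS if c != word])
    return word, ink

print([stroop_pair(congruent=False) for _ in range(5)])
```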

Users learn GUI patterns—patterns that their everyday experience widely reinforces—and they expect GUIs to behave predictably, so it’s unwise to deviate radically from the standards unless there are product-management reasons to do so.

I’ll write more about following standards versus designing something new in the coming few posts.

P.S. It looks like Jakob Nielsen got here before me.

Users are not used to it

For several years, I did usability testing on CAD-style software that was full of legacy code, some of which preceded Windows 98.

Some of that legacy code dealt with CAD objects that displayed on screen. To work with these objects, users had a choice of menu commands and toolbar buttons, supplemented by dialog boxes. For example, to move an object, users could not simply click and drag it; they would choose a command, click the object, and then, in a dialog box, enter the distance to move the object.

That’s the way CAD programs worked when that legacy code was originally written.
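
To make the contrast concrete, here is a toy sketch of the two interaction styles. The names are hypothetical stand-ins, not the real CAD code: the command-driven path mirrors “choose a command, click the object, enter a distance,” while direct manipulation makes the drag itself the command.

```python
# Toy sketch (hypothetical names) contrasting the two interaction styles.
from dataclasses import dataclass

@dataclass
class CadObject:
    x: float
    y: float

def move_command(obj: CadObject, dx: float, dy: float) -> None:
    """Legacy style: choose Move, click the object, type the offsets in a dialog."""
    obj.x += dx
    obj.y += dy

def drag(obj: CadObject, to_x: float, to_y: float) -> None:
    """Direct manipulation: click the object and drag it to where it should be."""
    obj.x, obj.y = to_x, to_y

point = CadObject(0.0, 0.0)
move_command(point, dx=10.0, dy=5.0)   # three steps for the user
drag(point, to_x=25.0, to_y=25.0)      # one gesture for the user
print(point)
```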

Over the years, during my usability testing of various features, I noticed a growing trend toward direct manipulation. That is, to work with an object, users would try to click it or drag it. They would do this without thinking. Even long-time users, faced with a new feature (studies from 2005-2006), would try direct manipulation first:

  • 100% of the test subjects clicked a cube, trying to select it.
  • 100% of the test subjects dragged a point or line, trying to move it.
  • 100% of the test subjects clicked in the window, trying to create a point.
  • 100% dragged across points, trying to select them.

But the new features were built on the legacy code, so they had the command-driven interaction style. A simple click on the object was usually a dead end for users.

And the users would say: “Darn,” and then look for another vector—another pathway—to complete the task.

The reasons we didn’t provide direct manipulation:

  • “Our users are used to the way it is now.” Clearly, the usability-test results negated that argument: users are not that attached to the old interaction style, because their first instinct for new CAD tasks in an existing product was direct manipulation.
  • “There’s not enough bang for the buck” because the opportunity cost (the cost of skipping other possible projects) was deemed too great. It’s hard to argue with this, as a usability analyst. The company opted for more features, and may have increased its risk of being leapfrogged by the competition, as discussed in an earlier blog.

Your usability advantage

When businesses buy software, rather than choose the software with the lowest purchase price, they ought to consider the total cost of ownership—including the added productivity and enjoyment that usability and user experience provide.

Every software company will say “our product is usable,” so how can you prove to prospective customers that you’ve really got usability?

Your product has a usability advantage if:

  1. Your development team’s motivation is right. Software meets customer business needs if it came out of a design and development process that considers stakeholders beyond the development team. (Incidentally, getting the motivation right is what Five Sketches™ was designed to help development teams do.)
  2. Trials quickly reveal product effectiveness. In a hands-on trial, you want users to try common tasks, figure them out, and say they liked the experience. A good hands-on trial reduces a competitor’s vendor demo to an infomercial.
  3. It’s about information more than data. Data requires cognitive transformation in the user’s head to become information. Information is ready now to support insight and appropriate action.
  4. Change management is minimal. Your mental model is clearly evident and the user experience is pleasant, so resistance to change is lower. Employees will see evidence of leadership rather than another “solution” imposed on them.
  5. Your training teaches skills. Pick one: training that leads users through the maze of an unusable interface, or training that teaches users smarter ways to work toward their goals.
  6. You have metrics. If you can tell customers how long it will take new users to start performing, you show your respect for their total cost of ownership. (A sketch of one such metric follows this list.)
  7. You have references. A product reference is as close as a Google search. In a web-2.0 world, your best “reference” could be an engaged, loyal user community.
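
Here is a minimal sketch of one such onboarding metric. The participant timings are made-up examples, not data from any real study, and a real report would segment by task and by user type.

```python
# Minimal sketch of a time-to-proficiency metric (made-up example timings).
from statistics import median

# Minutes until each test participant first completed a core task unaided.
minutes_to_first_success = [12, 18, 9, 25, 14, 11]

def report(times: list[float]) -> str:
    return (f"Median time to first unaided success: {median(times):.0f} min "
            f"(n={len(times)}, worst case {max(times)} min)")

print(report(minutes_to_first_success))
```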

The first 4½ to 5 points, above, require the Development team’s involvement, and the last few benefit from Dev involvement. Clearly, a usability advantage requires the involvement of other departments, directed by a product manager who works the Marketing, Sales, Support, and Development teams in concert. :)

This post was inspired by a Howard Hambrose article in Baseline Magazine, which recommends that IT professionals question software usability before they buy and implement.

Heuristics at the design stage

On the IxDA’s discussion list for interaction designers, Liam Greig posted his “human friendly” version of a heuristics checklist based on Nielsen’s originals and the ISO’s ergonomics of human-system interaction.

Here are just the headings and the human-friendly questions, which are useful at a project’s design stage.

A design should be…

  • transparent. Ask: Where am I? What are my options?
  • responsive. Ask: What is happening right now? Am I getting what I need?
  • considerate. Ask: Does this make sense to me?
  • supportive. Ask: Can I focus on my task? Do I feel frustrated?
  • consistent. Ask: Are my expectations accurate?
  • forgiving. Ask: Are mistakes easy to fix? Does the technology blame me for errors?
  • guiding. Ask: Do I know where to go for help?
  • accommodating. Ask: Am I in control? Am I afraid to make mistakes?
  • flexible. Ask: Can I customize my experience?
  • intelligent. Ask: Does the technology know who I am? Did the technology remember the way I left things?

Sometimes, at the analysis end of a Five Sketches™ ideation-design session, the design participants see more than one path forward. Use heuristics to frame the discussion of how each path rates, to reach a decision faster.

The heuristics can be about more than just usability. You can also assess coding costs, maintenance costs, code stability…. And, of course, you must also assess potential designs against the project’s requirements.
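
One way to frame that discussion is a simple weighted scorecard. The sketch below is hypothetical: the criteria weights, the ratings, and the two design paths are placeholders, and a real session would argue over the numbers rather than take them at face value.

```python
# Hypothetical scorecard: rate each design path against heuristics and other
# criteria (1-5), weight the criteria, and rank the paths by weighted score.
criteria = {
    "transparent": 3,
    "forgiving": 3,
    "consistent": 2,
    "coding cost": 2,
    "maintenance cost": 1,
}

paths = {
    "Path A": {"transparent": 4, "forgiving": 3, "consistent": 5,
               "coding cost": 2, "maintenance cost": 4},
    "Path B": {"transparent": 5, "forgiving": 4, "consistent": 3,
               "coding cost": 4, "maintenance cost": 3},
}

def weighted_score(ratings: dict[str, int]) -> int:
    return sum(criteria[name] * rating for name, rating in ratings.items())

for name, ratings in sorted(paths.items(), key=lambda p: weighted_score(p[1]), reverse=True):
    print(f"{name}: {weighted_score(ratings)}")
```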

From napkin to Five Sketches™

In 2007, a flash of insight hit me, which led to the development of the Five Sketches™ method for small groups who need to design usable software. Looking back, it was an interesting journey.

The setting. I was working on a two-person usability team faced with six major software- and web products to support. We were empowered to do usability, but not design. At the time, the team was in the early stages of Nielsen’s Corporate Usability Maturity model. Design, it was declared, would be the responsibility of the developers, not the usability team. I was faced with this challenge:

How to get usable products
from software- and web developers
by using a method that is
both reliable and repeatable.

The first attempt. I introduced each development team to the usability basics: user personas, requirements, paper prototyping, heuristics, and standards. Some developers went for usability training. In hindsight, it’s easy to see that none of this could work without a formal design process in place.

The second attempt. I continued to read, to listen, and to ask others for ideas. The answer came as separate pieces, from different sources. For several months, I was fumbling in the metaphorical dark, having no idea that the answer was within reach. Then, after a Microsoft product launch on Thursday, 18 October, 2007, the light went on. While sitting on a bar stool, the event’s guest speaker, GK Vanpatter, mapped out an idea for me on a cocktail napkin:

  1. Design requires three steps.
  2. Not everyone is comfortable with each of those steps.
  3. You have to help them.

[Napkin sketch: some key design ideas, conveyed to me in a Vancouver bar. The quadrants are the conative preferences, or preferred problem-solving styles.]

I recognised that I already had an answer to step 3, because I’d heard Bill Buxton speak at the 2007 UPA conference, four months earlier. I could help developers be comfortable designing by asking them to sketch.

It was more easily said than done. Everyone on that first team showed dedication and courage. We had help from a Vancouver-based process expert who skilfully debriefed each of us and then served us a summary of remaining problems to iron out. And, when we were done, we had the beginnings of an ideation-and-design method.

Since then, it’s been refined with additional teams of design participants, and it will be refined further—perhaps changed significantly to suit changing circumstances. But that’s the story of the first year.

Functional sophistication, not complexity

Some software companies add ever more features to their software as a way to differentiate it from its competitors. Lucinio Santos’ lengthy analysis of sophistication versus complexity includes this graphic:

[Graphic: functional sophistication, not complexity]

An excellent example of simplification is the Microsoft Office ribbon. Many users who upgrade dislike the ribbon for months because of the sheer amount of GUI change it imposes, but the ribbon successfully simplifies the GUI and makes existing features more discoverable.

Incidentally, the Office ribbon was designed by a design team using generative design. I facilitated a ribbon-design project in which a team of developers used Five Sketches™—a method that incorporates generative design.

Complicated GUI is fixable

According to usability guru Jakob Nielsen, the worst mistakes in GUI design are domain-specific. Usually, he says, applications fail because they:

  • solve the wrong problem.
  • have the wrong features for the right problem.
  • make the right features too complicated to understand.

Nielsen’s last point reminds me of what a product manager once told me: many users of highly specialised software think of themselves as experts, but only a few are. His hypothesis? The elaborate feature sets are too large and too complex to learn fully.

One of my projects involved software for dieticians. The software allowed users to enter a recipe, and it would calculate the nutritive value per portion. Users learned the basic settings that produced an adequate result, and they ignored the extra features that could take into consideration various complex chemical interactions between the recipe ingredients. Ironically, the very presence of that extra visual and cognitive complexity increased the likelihood that users would satisfice, avoiding the short-term pain of learning something new. When the product was developed, each extra feature seemed a good idea, and each may also have helped sell the product. But, good idea or not, those extra features needed to be removed, hidden from the majority of users, or redesigned.
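
The “adequate result” those users settled for is essentially the following calculation. This sketch uses approximate, made-up nutrient figures and, like the users, it ignores the ingredient-interaction features entirely.

```python
# Minimal sketch (made-up figures): nutritive value per portion of a recipe.
NUTRIENTS_PER_100G = {
    "flour":  {"kcal": 364, "protein_g": 10.3},
    "butter": {"kcal": 717, "protein_g": 0.9},
    "sugar":  {"kcal": 387, "protein_g": 0.0},
}

def per_portion(ingredients: dict[str, float], portions: int) -> dict[str, float]:
    """ingredients maps name -> grams; returns nutrients per portion."""
    totals: dict[str, float] = {}
    for name, grams in ingredients.items():
        for nutrient, value in NUTRIENTS_PER_100G[name].items():
            totals[nutrient] = totals.get(nutrient, 0.0) + value * grams / 100.0
    return {n: round(v / portions, 1) for n, v in totals.items()}

print(per_portion({"flour": 250, "butter": 125, "sugar": 100}, portions=20))
```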

Resolving the “extra features” problem

  1. If the extra features are superfluous, remove them. Usage data can help identify seldom-used features, and many of our products are capable of collecting usage data, though we currently only collect it after crashes and mainly during Beta testing. However, removing a seldom-used part of an existing feature is a complex decision, and one for the Product Manager to make. The difficulty lies in determining whether a feature would be used more if it were simpler to use. In that case, it may not be superfluous.
  2. If the extra features are used only occasionally by relatively few users, then hide them. The suggested GUI treatment for an occasional-by-few control is to expose it only in the context of a related task. Do not clutter the main application window, menu bar, or the main dialog boxes with controls for occasional-by-few tasks. Hiding the controls for an occasional-by-few task is supported by the Isaacs-Walendowski frequency-commonality grid:
     •  Used frequently, by many: visible; few clicks.
     •  Used frequently, by few: suggested; few clicks.
     •  Used occasionally, by many: suggested; more clicks.
     •  Used occasionally, by few: hidden; more clicks.
     (A code sketch after this list shows the grid as a simple lookup.)
  3. If the extra feature is to be a core feature, simplify it. I’m talking about a feature that the Product Manager believes would be used frequently or by many if users could figure it out. Burying or hiding such features isn’t the answer. You need to find ways to reduce complexity by designing the interaction well and by organising the GUI well. For this, Five Sketches™ can help.
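
As promised above, here is the grid as a simple lookup. The thresholds that decide “frequent” and “many” are placeholders; real values would come from the kind of usage data mentioned in point 1.

```python
# Sketch of the frequency-commonality grid as a lookup (placeholder thresholds).
GRID = {
    ("frequent", "many"):   "visible, few clicks",
    ("frequent", "few"):    "suggested, few clicks",
    ("occasional", "many"): "suggested, more clicks",
    ("occasional", "few"):  "hidden, more clicks",
}

def treatment(uses_per_user_per_week: float, share_of_users: float) -> str:
    frequency = "frequent" if uses_per_user_per_week >= 1 else "occasional"
    commonality = "many" if share_of_users >= 0.30 else "few"
    return GRID[(frequency, commonality)]

print(treatment(uses_per_user_per_week=0.1, share_of_users=0.05))  # hidden, more clicks
```
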
What are the requirements, really?

All this raises the question: who can tell us which features are the extra features (the features to omit), which ones are occasional-by-few (the features to hide), and which ones are used frequently or by many users (the features on which to focus your biggest design guns)? Nielsen says “Base your decisions on user research” and then test the early designs with users. He adds:

People don’t want to hear me say that they need to test their UI. And they definitely don’t want to hear that they have to actually move their precious butts to a customer location to watch real people do the work the application is supposed to support. The general idea seems to be that real programmers can’t be let out of their cages. My view is just the opposite: no one should be allowed to work on an application unless they’ve spent a day observing a few end users.

Conclusion: conduct user research and use what you learn to inform the design.

Choose: usability or features?

I was talking to a B2B product manager who told me “The industry we target sees little difference between our product and our competitors.” Their plan is to differentiate their product from its competitors. My question: to make your software different from that of the competition, should you mainly add new functionality or mainly improve the usability?

Bob Holt addresses this question in his article, Death by 1000 cuts. He asks: “As the worlds of our customers and our own business models continue to change and evolve, should we be changing the balance between improving the usability of our current products and adding new functionality?” Holt answers his own question: “I say absolutely yes. After all, wouldn’t it be a shame if our revenues bled out through a thousand little cuts while we are rushing around trying to build the next Big Thing?”

But the same product manager I mentioned above also told me: “I firmly believe that you cannot win only by addressing usability. You must also look to the future of the market—otherwise you won’t innovate past your current box, and you risk getting leapfrogged.”