Photos help user personas succeed

If your user persona includes an image, which type of image helps the team produce designs that are more usable?

[Image: the same user persona shown as an illustration (left) and as a photo (right)]

The illustration on the left?  Or the photo on the right?

According to Frank Long’s research paper, “Real or Imaginary: The effectiveness of using personas in product design,” photos are better than illustrations. Teams whose user personas include a photograph of the persona produce designs that rate higher when assessed with Nielsen’s heuristics for UI design.

Frank Long compared the design output of three groups of his students at the National College of Art and Design (NCAD) in Ireland during a five-week design project. Two of the groups used user personas in different formats; the third, a control group, worked without user personas. The experiment looked for differences in the heuristic assessments of the groups’ designs.

Using photos rather than illustrations is one of the ways I’ve engaged project teams with the user personas that I researched and wrote for them. Here’s a teaser:

A card game to help an Agile software team learn about their product’s user personas.

Epistemology of usability studies

Currently, I’m conducting research on usability analysis and on how Morae software might influence that. My research gaze is rather academic, in that I’m especially interested in the epistemology of usability analysis.

One of my self-imposed challenges is to make my research relevant to usability practitioners. I’m a practitioner and CUA (Certified Usability Analyst) myself, and I have little time for academic exercises because I work where the rubber hits the road. This blog post outlines what I’m up to.

At Simon Fraser University, I learned that epistemological approaches have different assumptions about what is knowable. On one side (below, left), it’s about numbers, rates, percentages, graphs, grids, tables, and proving absolute truths. On the other side (below, right), it’s about seeking objectivity while knowing that it’s impossible, because everything has a cultural context. The epistemology you choose when doing research depends on what you believe, and that epistemology dictates what methods you use and how you report your results.

  • On the left: “You can be certain of what you know.”
  • On the right: “You cannot be objective about what you know.”

Let’s look at some examples.

Study 1 fits with the view (above, left) that “you can be certain of what you know.” I plan and conduct a quantitative study to measure the time it takes a series of users to complete two common tasks in a software package: upgrading to the latest version of the software, and activating the software. I make appointments with users. In my workplace, I give each user a scenario and a computer. I observe them and time them as they complete the tasks by using the software package. My hope is that statistical analysis will give me results that I can report, including the average time on task with error bars, as the graph (right) illustrates.
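In case it’s useful, here’s a minimal sketch of the arithmetic behind those error bars. The timings are invented for illustration; they are not data from the study:

```python
# A rough sketch: average time on task with a ~95% confidence interval for error bars.
# The timings below are invented for illustration only.
import math
import statistics

# Hypothetical completion times, in seconds, for the "upgrade the software" task.
upgrade_times = [312, 275, 340, 298, 410, 265, 330, 355]

mean = statistics.mean(upgrade_times)
stdev = statistics.stdev(upgrade_times)       # sample standard deviation
sem = stdev / math.sqrt(len(upgrade_times))   # standard error of the mean
margin = 1.96 * sem                           # ~95% CI; a t-multiplier is more exact for small samples

print(f"Mean time on task: {mean:.0f} s")
print(f"Error bars (~95% CI): {mean - margin:.0f} s to {mean + margin:.0f} s")
```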

Study 2 fits with the view (above, right) that “you cannot be objective about what you know” because all research takes place within a context. To lessen the impact of conducting research, I contact users to ask whether I can observe them in their workplace. I observe each user for a day. My hope is to analyse the materials and interactions that I’ve observed in context—complete with typical interruptions, distractions, and stimuli. Since a new software version has just been released, I hope to observe users as they upgrade. I’ll report any usability issues, interaction-design hurdles, and unmet needs that I observe.

The above are composites of studies I conducted.

  • Study 1 revealed several misunderstandings and installation problems, including a user who abandoned the installation process because he believed it was complete. I was able to report the task success rate and have the install wizard fixed.
  • Study 2 revealed that users write numbers on paper and then re-enter them elsewhere, which had not been observed when users visited our site for usability testing. One user told me: “I never install the latest version because the updates can be unstable,” and another, wanting to avoid unexpected new defects, said: “I only upgrade if there’s a fix for a feature I use.” I was able to report the paper-based workaround and the users’ feelings about quality, for product managers to reflect in future requirements.

Clearly, there’s more than one way to conduct research, and not every method fits every team. That’s an idea that can be explored at length.

This has me wondering: which method fits what, when, where? Is there a relationship between a team’s development process and the approach to user research (epistemology) that it’s willing to embrace? …between its corporate usability maturity and the approach?

Those are two of the lines of inquiry in my research at Simon Fraser University.

If you liked this post, you may also like Are usability studies experiments?

Standard OK-Cancel button order

I have two stories about command buttons.

Quite a few years ago, a team member walked me through a new dialog box. He entered some data, and then unintentionally clicked the Cancel button. He made this error twice in a row, thus losing his changes twice in a row. I pointed out that the OK and Cancel buttons were in the wrong order. The developer switched the buttons to the Windows-standard layout (below, right), and the user-performance problem was solved.
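For reference, here’s a minimal sketch of that Windows-standard layout, written in Python with tkinter rather than in the project’s actual toolkit: the command buttons sit at the bottom right of the dialog box, with OK to the left of Cancel.

```python
# A rough sketch (tkinter), not the project's actual dialog: the Windows-standard
# command-button layout, with OK and Cancel at the bottom right and OK first.
import tkinter as tk

root = tk.Tk()
root.title("Edit item")

tk.Label(root, text="Name:").grid(row=0, column=0, padx=8, pady=8, sticky="w")
tk.Entry(root, width=30).grid(row=0, column=1, padx=8, pady=8)

buttons = tk.Frame(root)
buttons.grid(row=1, column=0, columnspan=2, sticky="e", padx=8, pady=8)

# Standard order and wording: OK (the affirmative action) to the left of Cancel.
tk.Button(buttons, text="OK", width=10, command=root.destroy).pack(side="left", padx=4)
tk.Button(buttons, text="Cancel", width=10, command=root.destroy).pack(side="left")

root.mainloop()
```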

A few years later, on a different project, not only were the buttons in a non-standard order, they also used non-standard wording and coloured icons. My request to follow the Windows standard was met only half-way, and the half-finished change went to Beta testing before I saw it again. The buttons were now in the correct order, but the button names had changed, and the names and icons were still non-standard. Beta testers loudly protested the change. (Beta testers are often expert users, and experts abhor any change that slows them down.) At the time, the company was only a few steps up the Nielsen Corporate Usability Maturity model, so instead of completing the change to Windows-standard OK and Cancel buttons, the buttons were rolled back to appease the protesting Beta users. I found out too late to retest with Windows-standard buttons, so there was no data to convince the developers. For me, it was an opportunity to learn from failure. :)

Why is non-standard so hard?

Try this Stroop test (right). Ignore the words. Instead, identify the colours, out loud. No doubt, the second panel, in which the words and the ink colours mismatch, went more slowly and took more effort.

Try the variation at left. Find the first occurrence of the word Blue. Next, find the first occurrence of the colour blue.
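If you want to recreate the first test (naming the ink colours) yourself, here’s a minimal sketch of a console Stroop test in Python. It’s my own demo, not the panels shown above, and it assumes a terminal that supports ANSI colours:

```python
# A rough sketch of a console Stroop test (my own demo, not the panels from the post).
# Colour words print in mismatched ink colours; you name the ink colour, not the word.
import random
import time

ANSI = {"red": "\033[31m", "green": "\033[32m", "blue": "\033[34m", "yellow": "\033[33m"}
RESET = "\033[0m"
COLOURS = list(ANSI)

def run_panel(congruent: bool, trials: int = 8) -> float:
    """Print a panel of colour words and time how long it takes to name the ink colours."""
    label = "congruent" if congruent else "mismatched"
    input(f"\nPress Enter, then say each ink colour aloud ({label} panel)...")
    start = time.perf_counter()
    for _ in range(trials):
        word = random.choice(COLOURS)
        ink = word if congruent else random.choice([c for c in COLOURS if c != word])
        print(f"{ANSI[ink]}{word.upper()}{RESET}", end="  ")
    print()
    input("Press Enter once you have named them all.")
    return time.perf_counter() - start

congruent_time = run_panel(congruent=True)
mismatched_time = run_panel(congruent=False)
print(f"\nCongruent panel: {congruent_time:.1f} s   Mismatched panel: {mismatched_time:.1f} s")
```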

Just as mismatches between text and colour slow your Stroop-test performance, mismatches between the standard button layout users expect and a non-standard layout slow user performance. Our Beta users clicked the wrong buttons—a huge waste of their time—because the new solution didn’t follow any standard. The Beta testers were right to protest, but wrong in their demand to revert to the original non-standard state. (See: Customers can’t do your job.)

Users learn GUI patterns—patterns that are widely reinforced by everyday use—and they expect GUIs to behave predictably, so it’s unwise to deviate radically from the standards unless there are product-management reasons to do so.

I’ll write more about following standards versus designing something new in the coming few posts.

P.S. It looks like Jakob Nielsen got here before me.

From napkin to Five Sketches™

In 2007, a flash of insight hit me, which led to the development of the Five Sketches™ method for small groups who need to design usable software. Looking back, it was an interesting journey.

The setting. I was working on a two-person usability team faced with six major software and web products to support. We were empowered to do usability, but not design. At the time, the organisation was in the early stages of Nielsen’s Corporate Usability Maturity model. Design, it was declared, would be the responsibility of the developers, not the usability team. I was faced with this challenge:

How to get usable products
from software and web developers
by using a method that is
both reliable and repeatable.

The first attempt. I introduced each development team to the usability basics: user personas, requirements, paper prototyping, heuristics, and standards. Some developers went for usability training. In hindsight, it’s easy to see that none of this could work without a formal design process in place.

The second attempt. I continued to read, to listen, and to ask others for ideas. The answer came in separate pieces, from different sources. For several months, I fumbled in the metaphorical dark, having no idea that the answer was within reach. Then, after a Microsoft product launch on Thursday, 18 October 2007, the light went on. While I sat on a bar stool, the event’s guest speaker, GK VanPatter, mapped out an idea for me on a cocktail napkin:

  1. Design requires three steps.
  2. Not everyone is comfortable with each of those steps.
  3. You have to help them.
Some key design ideas, conveyed to me on a napkin sketch in a Vancouver bar.

The quadrants are the conative preferences or preferred problem-solving styles.

I recognised that I already had an answer to step 3, because I’d heard Bill Buxton speak at the 2007 UPA conference, four months earlier. I could help developers be comfortable designing by asking them to sketch.

It was more easily said than done. Everyone on that first team showed dedication and courage. We had help from a Vancouver-based process expert who skilfully debriefed each of us and then served us a summary of remaining problems to iron out. And, when we were done, we had the beginnings of an ideation-and-design method.

Since then, it’s been refined with additional teams of design participants, and it will be refined further—perhaps changed significantly to suit changing circumstances. But that’s the story of the first year.

Complicated GUI is fixable

According to usability guru Jakob Nielsen, the worst mistakes in GUI design are domain-specific. Usually, he says, applications fail because they:

  • solve the wrong problem.
  • have the wrong features for the right problem.
  • make the right features too complicated to understand.

Nielsen’s last point reminds me of what a product manager once told me: many users of highly specialised software think of themselves as experts, but only a few are. His hypothesis? Elaborate sets of features are too numerous or too complex to learn fully.

One of my projects involved software for dieticians. The software allowed users to enter a recipe, and it calculated the nutritive value per portion. Users learned the basic settings that produced an adequate result, and they ignored the extra features that could take into account the complex chemical interactions between recipe ingredients. The extra features added visual and cognitive complexity, and ironically, their very presence increased the likelihood that users would satisfice, avoiding the short-term pain of learning something new. When the product was developed, each extra bit had seemed a good idea, and each may also have helped sell the product. But, good idea or not, those extra features needed to be removed, hidden from the majority of users, or redesigned.

Resolving the “extra features” problem
  1. If the extra features are superfluous, remove them. Usage data can help identify seldom-used features, and many of our products are capable of collecting usage data, though we currently only collect it after crashes and mainly during Beta testing. However, removing a seldom-used part of an existing feature is a complex decision, and one for the Product Manager to make. The difficulty lies in determining whether a feature would be used more if it were simpler to use. In that case, it may not be superfluous.
  2. If the extra features are used only occasionally by relatively few users, then hide them. The suggested GUI treatment for an occasional-by-few control is to expose it only in the context of a related task. Do not clutter the main application window, menu bar, or the main dialog boxes with controls for occasional-by-few tasks. Hiding the controls for an occasional-by-few task is supported by the Isaacs-Walendowski frequency-commonality grid, shown below and sketched as code after this list:
     If the feature is…      Used by many               Used by few
     Used frequently         Visible. Few clicks.       Suggested. Few clicks.
     Used occasionally       Suggested. More clicks.    Hidden. More clicks.

     (Each cell is the suggested GUI treatment.)
  3. If the extra feature is to be a core feature, simplify it. I’m talking about a feature that the Product Manager believes would be used frequently or by many if users could figure it out. Burying or hiding such features isn’t the answer. You need to find ways to reduce complexity by designing the interaction well and by organising the GUI well. For this, Five Sketches™ can help.
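Taken together, the grid reduces to a simple lookup from a feature’s usage pattern to its suggested GUI treatment. Here’s a minimal sketch of that idea in Python; it’s my own illustration of the grid above, not code from Isaacs and Walendowski:

```python
# A rough sketch, my own illustration of the frequency-commonality grid above:
# map a feature's usage pattern to its suggested GUI treatment.
from typing import NamedTuple

class Treatment(NamedTuple):
    visibility: str   # "visible", "suggested", or "hidden"
    clicks: str       # "few clicks" or "more clicks"

GRID = {
    ("frequently", "many"):   Treatment("visible", "few clicks"),
    ("frequently", "few"):    Treatment("suggested", "few clicks"),
    ("occasionally", "many"): Treatment("suggested", "more clicks"),
    ("occasionally", "few"):  Treatment("hidden", "more clicks"),
}

def gui_treatment(frequency: str, commonality: str) -> Treatment:
    """Frequency is "frequently" or "occasionally"; commonality is "many" or "few"."""
    return GRID[(frequency, commonality)]

# Example: an occasional-by-few feature should be hidden, behind more clicks.
print(gui_treatment("occasionally", "few"))
```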
What are the requirements, really?

All this raises the question: who can tell us which features are the extra features (the features to omit), which ones are occasional-by-few (the features to hide), and which ones are used frequently or by many users (the features on which to focus your biggest design guns)? Nielsen says “Base your decisions on user research” and then test the early designs with users. He adds:

People don’t want to hear me say that they need to test their UI. And they definitely don’t want to hear that they have to actually move their precious butts to a customer location to watch real people do the work the application is supposed to support. The general idea seems to be that real programmers can’t be let out of their cages. My view is just the opposite: no one should be allowed to work on an application unless they’ve spent a day observing a few end users.  …More.

Conclusion: conduct user research and use what you learn to inform the design.