Researching usability research

I’m conducting ethnographic research into how usability analysts regard usability research.

How will I conduct this research?

I’m conducting a form of community-based participatory research, so members of the community—the research subjects—will help me set the questions or lines of inquiry and will influence the research methods. This is appropriate since I’m researching people who research—people who likely have a greater awareness of research epistemology and of the range of methods that can be used.

Want to participate?

If you have conducted any usability research at all, and you want to participate, please contact me by commenting. (Look for the comment form immediately below.) These comments are private. I’ll be at the 2009 UPA conference in Portland this week, June 8-12, if you want to meet in person.

Replies so far: 3. I have slots for only 8 more.

Low-fi sketching increases user input

Here are three techniques for eliciting more feedback on your designs:

  • show users alternative designs rather than a single design.
  • show users a low-fidelity rather than high-fidelity rendering.
  • ask users to sketch their feedback.

To iterate and improve the design, you need honest feedback. Let’s look at how and why each of these techniques might work.

Showing alternative designs signals that the design process isn’t finished. If you engage in generative design, you’ll have several designs to show to users. Users are apparently reluctant to critique a completed design, so a clear signal that the process is not yet finished encourages users to voice their views, but only somewhat.

Using a low-fidelity rendering elicits more feedback than the same design in a high-fidelity rendering. Again, users are apparently reluctant to critique something that looks finished—as a high-fidelity rendering does.

[Figure: the same design rendered in high fidelity and in low fidelity]

The design is the same, but it feels more difficult to criticise the one on the right.

Asking users to sketch their feedback turns out to be the single most important factor in eliciting feedback. It’s not known why, because there hasn’t been sufficient published research, but I hypothesize that it’s because this is the most indirect form of criticism.

Where’s the evidence for sketched feedback?

The evidence is unpublished and anecdotal. The problem with unpublished data is that you must be in the right place at the right time to get it, as I was during the UPA 2007 annual conference when Bill Buxton asked the room for a show of hands. Out of about 1000 attendees, several dozen said they had received more and better design-related feedback by asking users to sketch than by eliciting verbal feedback.

When you ask a user: “Tell me how to make this better,” they shrug. When you hand them a pen and paper and ask: “Sketch for me how to make this better,” users start sketching. They suddenly have lots of ideas.

My own experience agrees with this. In Perth, Australia, I took sketches from a Five Sketches™ design session to a customer site for feedback. I also brought blank paper and pens, and asked for sketches of better ideas.

Not surprisingly, the best approach is to combine all three techniques: show users several low-fidelity designs, and then ask them to sketch ways to make the designs better.

Which user involvement works

User-centred design (UCD) advocates involving users in the design process. Have you wondered what forms that user involvement can take, and which forms lead to the most successful outcomes?

I recently came across data that Mark Keil published a while ago. He surveyed software companies and correlated project outcomes with the type of user access that designers and developers had.

Type of contact with users, with relative effectiveness shown as bars:

For custom software projects
  • Facilitate teams, hold structured workshops with users, or use joint-application development (JAD). ██████████
  • Expose users to a UI prototype or early version to uncover any UI issues. ██████
  • Expose users to a prototype or early version to discover the system requirements. ████
  • Hold one-on-one semi-structured or open-ended interviews with users. ████
  • Test the product internally (acceptance testing rather than QA testing for bugs) to uncover new requirements. ██
  • Use an intermediary to define user goals and needs, and to convey them to designers and developers. ██
  • Collect user questions, requirements, and problems indirectly, by e-mail or in online locations.

For packaged or mass-market software projects
  • Listen to live/synchronous phone support, tech-support, or help-desk calls. ████████
  • Hold one-on-one semi-structured or open-ended interviews with users. ██████
  • Expose users to a UI prototype or early version to uncover any UI issues. ████
  • Convene a group of users, from time to time, to discuss usage and improvements. ████
  • Expose users to a prototype or early version to discover the system requirements. ██
  • Test the product internally (acceptance testing rather than QA testing for bugs) to uncover new requirements. ██
  • Consult marketing and sales people who regularly meet with and listen to customers. ██
  • At trade shows, show a mock-up or prototype to users and get their feedback.

Not reported as effective in this 1995 source
  • Conduct a (text) survey of a sample of users.
  • Conduct a usability test to “tape and measure” users in a formal usability lab. (This study precedes such products as TechSmith Morae.)
  • Observe users for an extended period, or conduct ethnographic research.
  • Conduct focus groups to discuss the software.

Although Keil’s article includes quantitative data, his samples are small, so I opted to show only the relative usefulness of the various methods. My descriptions, above, are long because the original article uses 1995 terms that have shifted in meaning. I believe some of the categories now overlap, due to changes in technology and method. For example, exposing users to a UI prototype in order to uncover UI issues sounds much like the early usability testing I now do with TechSmith Morae, yet the 1995 results rate prototype exposure as effective and formal-lab usability testing as not effective.

For details, see the academic article by Mark Keil (Customer-developer links in software development). Educational publishers typically require a fee for access.

These methods also relate to research I’m doing on the epistemology of usability analysis.

Epistemology of usability studies

Currently, I’m conducting research on usability analysis and on how Morae software might influence that. My research gaze is rather academic, in that I’m especially interested in the epistemology of usability analysis.

One of my self-imposed challenges is to make my research relevant to usability practitioners. I’m a practitioner and Certified Usability Analyst (CUA) myself, and I have little time for purely academic exercises because I work where the rubber hits the road. This blog post outlines what I’m up to.

At Simon Fraser University, I learned that epistemological approaches make different assumptions about what is knowable. At one end (below, left), it’s about numbers, rates, percentages, graphs, grids, and tables, and about proving absolute truths. At the other end (below, right), it’s about seeking objectivity while knowing that objectivity is impossible because everything has a cultural context. The epistemology you choose when doing research depends on what you believe, and that epistemology dictates which methods you use and how you report your results.

[Figure: two epistemological poles. Left: “You can be certain of what you know.” Right: “You cannot be objective about what you know.”]

Let’s look at some examples.

Study 1 fits with the view (above, left) that “you can be certain of what you know.” I plan and conduct a quantitative study to measure the time it takes a series of users to complete two common tasks in a software package: upgrading to the latest version of the software, and activating the software. I make appointments with users. In my workplace, I give each user a scenario and a computer. I observe them and time them as they complete the tasks by using the software package. My hope is that statistical analysis will give me results that I can report, including the average time on task with error bars, as the graph (right) illustrates.
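
To make Study 1’s quantitative goal concrete, here is a minimal sketch of the kind of calculation behind “average time on task with error bars.” It is illustrative only: the task names, timings, and the mean_and_ci helper are hypothetical, not data or code from my study.

    # Minimal sketch: average time on task with 95% confidence intervals.
    # All task times below are made-up placeholder values, not study data.
    import math

    # Seconds each participant took to complete each task (hypothetical).
    task_times = {
        "Upgrade to the latest version": [312, 275, 401, 350, 298, 330],
        "Activate the software":         [95, 120, 88, 140, 102, 110],
    }

    def mean_and_ci(samples, t_value=2.571):
        """Return (mean, half-width of the 95% confidence interval).
        t_value = 2.571 is the two-tailed 95% t critical value for
        5 degrees of freedom (six participants per task)."""
        n = len(samples)
        mean = sum(samples) / n
        variance = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
        std_err = math.sqrt(variance / n)                           # standard error of the mean
        return mean, t_value * std_err

    for task, times in task_times.items():
        mean, ci = mean_and_ci(times)
        print(f"{task}: {mean:.0f} s ± {ci:.0f} s (95% CI)")  # error-bar half-width

The half-width returned by mean_and_ci is what would become the error bar on each task’s average in the graph.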

Study 2 fits with the view (above, right) that “you cannot be objective about what you know” because all research takes place within a context. To lessen the impact of conducting research, I contact users to ask if I can study their workplace. I observe each user for a day. My hope is to analyse the materials and interaction that I’ve observed in context—complete with typical interruptions, distractions, and stimuli. Since a new software version has just been released, my hope is that I’ll get to observe them as they upgrade. I’ll report any usability issues, interaction-design hurdles, and unmet needs that I observe.

The two studies above are composites of studies I actually conducted.

  • Study 1 revealed several misunderstandings and installation problems, including a user who abandoned the installation process because he believed it was complete. I was able to report the task success rate and have the install wizard fixed.
  • Study 2 revealed that users write numbers on paper and then re-enter them elsewhere, which had not been observed when users visited our site for usability testing. One user told me: “I never install the latest version because the updates can be unstable,” and another said: “I only upgrade if there’s a fix for a feature I use,” to avoid unexpected new defects. I was able to report the paper-based workaround and the users’ feelings about quality, for product managers to reflect in future requirements.

Clearly, there’s more than one way to conduct research, and not every method fits every team. That’s an idea that can be explored at length.

This has me wondering: which method fits what, when, where? Is there a relationship between a team’s development process and the approach to user research (epistemology) that it’s willing to embrace? …between its corporate usability maturity and the approach?

Those are two of the lines of inquiry in my research at Simon Fraser University.

If you liked this post, you may also like “Are usability studies experiments?”