Researching usability research

I’m conducting ethnographic research into how usability analysts regard usability research.

How will I conduct this research?

I’m conducting a form of community-based participatory research, so members of the community (the research subjects) will help me set the questions or lines of inquiry and will influence the research methods. This is appropriate because I’m researching people who research: people who likely have a greater awareness of research epistemology and of the range of methods that can be used.

Want to participate?

If you have conducted any usability research at all, and you want to participate, please contact me by commenting. (Look for the comment form immediately below.) These comments are private. I’ll be at the 2009 UPA conference in Portland this week, June 8-12, if you want to meet in person.

Replies so far: 3. I have slots for only 8 more.

Are usability studies experiments?

When I conduct usability studies, I use a laptop and Morae to create a portable and low-cost usability lab. I typically visit the participant on site, so I can look around. I provide a scenario that participants can follow as they try using a new software feature. Morae records their voices, facial expressions, on-screen actions, clicks, and typing. The raw data gets saved in a searchable and graphable format. Afterward, I review the recorded data (I rarely have written notes) and make evidence-based recommendations to the feature’s project team.
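
If you haven’t worked with recorded sessions: once the raw data is in a searchable format, the review stage is largely a matter of filtering and tallying. Here’s a minimal sketch in Python of that kind of tally. The CSV layout is my own invention for illustration, not Morae’s actual export format.

```python
import csv
from collections import Counter
from io import StringIO

# Stand-in for session data exported from the recording tool.
# The columns are hypothetical, not Morae's real export format.
export = StringIO("""participant,task,marker,severity
P1,create report,hesitation,minor
P1,create report,error,major
P2,create report,comment,info
P2,share report,error,major
P3,share report,hesitation,minor
""")

# Tally the logged observation markers by task and severity
tally = Counter()
for row in csv.DictReader(export):
    tally[(row["task"], row["severity"])] += 1

for (task, severity), count in sorted(tally.items()):
    print(f"{task:15s} {severity:6s} {count}")
```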

For example, in one usability study I realised that the order of the steps was causing confusion and doubt. The three-step wizard would disappear after step 2, so users could modify their data on the screen. Users thought the task was complete at step 2, so the wizard’s reappearance for step 3 caused confusion. I recommended we change the order of the steps:

Change the order of the steps

Changing the order fixed the “confused users” problem. But was this scientific? Aside from the fact that I’m researching human subjects rather than test tubes, am I following a scientific model? At first glance, the usability test I conducted doesn’t seem to follow the positivist model of what constitutes an experiment:

A positivist model

For example, unlike the test-tube lab experiment of a chemist, my usability test had no control group. Also, my report went beyond factual conclusions to actual recommendations (based on my expert opinion) for the next iteration.

I can argue this both ways.

On the one hand, my usability test does have a control group if I take the next iteration of the product and repeat the usability test with additional participants, to see whether my recommendations solved the problem. I could compare the task-completion rates and task durations across the two iterations (a comparison sketched in code below).

A model of usability testing
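
To make that concrete, here’s a minimal sketch in Python of the before-and-after comparison. The session data is made up, and with samples this small the significance test is illustrative only:

```python
from math import erf, sqrt
from statistics import mean

# (completed?, task duration in seconds) per participant; all values invented
baseline = [(True, 210), (False, 340), (True, 255), (False, 310), (True, 190)]
redesign = [(True, 150), (True, 175), (True, 160), (False, 290), (True, 145)]

def completion_rate(sessions):
    return sum(done for done, _ in sessions) / len(sessions)

def mean_duration(sessions):
    # Mean time on task, counting completed tasks only
    return mean(t for done, t in sessions if done)

def two_proportion_z(p1, n1, p2, n2):
    # Two-proportion z-test on the difference in completion rates
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed
    return z, p_value

p1, p2 = completion_rate(baseline), completion_rate(redesign)
z, p_value = two_proportion_z(p1, len(baseline), p2, len(redesign))
print(f"completion: {p1:.0%} -> {p2:.0%} (z = {z:.2f}, p = {p_value:.2f})")
print(f"mean duration: {mean_duration(baseline):.0f}s -> {mean_duration(redesign):.0f}s")
```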

On the other hand, if I were asked to determine whether the product has an “efficient” workflow or a “great” user experience—which are subjective measures—I’d say a positivist-model experiment is inappropriate. To measure a user’s confusion or satisfaction, I might consider their facial expressions, verbal utterances, and self-reported ratings. This calls for a research design whose epistemology is rooted in post-positivist, ethnomethodological, situated, standpoint, or critical approaches, and has more in common with research done by an ethnographer than by a chemist.

If you liked this, you may also like Epistemology of usability studies.

Epistemology of usability studies

Currently, I’m conducting research on usability analysis and on how Morae software might influence that. My research gaze is rather academic, in that I’m especially interested in the epistemology of usability analysis.

One of my self-imposed challenges is to make my research relevant to usability practitioners. I’m a practitioner and CUA myself, and I have little time for academic exercises because I work where the rubber hits the road. This blog post outlines what I’m up to.

At Simon Fraser University, I learned that epistemological approaches make different assumptions about what is knowable. On one side (below, left), it’s about numbers, rates, percentages, graphs, grids, and tables: about proving absolute truths. On the other side (below, right), it’s about seeking objectivity while knowing that objectivity is impossible, because everything has a cultural context. The epistemology you choose when doing research depends on what you believe, and that epistemology dictates what methods you use and how you report your results.

Left: “You can be certain of what you know.”
Right: “You cannot be objective about what you know.”

Let’s look at some examples.

Study 1 fits with the view (above, left) that “you can be certain of what you know.” I plan and conduct a quantitative study to measure the time it takes a series of users to complete two common tasks in a software package: upgrading to the latest version of the software, and activating the software. I make appointments with users. In my workplace, I give each user a scenario and a computer. I observe them and time them as they complete the tasks by using the software package. My hope is that statistical analysis will give me results that I can report, including the average time on task with error bars, as the graph (right) illustrates.
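
For illustration, here’s a minimal sketch in Python of the analysis behind such a graph. The timings are invented, and the error bars are 95% confidence intervals computed with a t critical value for eight participants:

```python
from math import sqrt
from statistics import mean, stdev

# Seconds to complete each task, one value per participant (made-up data)
times = {
    "upgrade":  [410, 365, 520, 455, 390, 610, 430, 475],
    "activate": [95, 120, 80, 140, 105, 90, 130, 110],
}

T_CRIT = 2.365  # two-tailed t critical value for a 95% CI, df = 7 (n = 8)

for task, xs in times.items():
    m = mean(xs)
    # The CI half-width is what the error bar on the graph would show
    half_width = T_CRIT * stdev(xs) / sqrt(len(xs))
    print(f"{task:8s}: {m:.0f}s ± {half_width:.0f}s (95% CI)")
```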

Study 2 fits with the view (above, right) that “you cannot be objective about what you know” because all research takes place within a context. To lessen the impact of my presence as a researcher, I contact users to ask if I can study them in their workplace. I observe each user for a day. My hope is to analyse the materials and interactions I observe in context, complete with typical interruptions, distractions, and stimuli. Since a new software version has just been released, I also hope to observe users as they upgrade. I’ll report any usability issues, interaction-design hurdles, and unmet needs that I observe.

The studies above are composites of studies I actually conducted:

  • Study 1 revealed several misunderstandings and installation problems, including a user who abandoned the installation process because he believed it was complete. I was able to report the task success rate and have the install wizard fixed.
  • Study 2 revealed that users write numbers on paper and then re-enter them elsewhere, a workaround that had never surfaced when users visited our site for usability testing. One user told me: “I never install the latest version because the updates can be unstable,” and another said: “I only upgrade if there’s a fix for a feature I use,” to avoid unexpected new defects. I was able to report the paper-based workaround and the users’ feelings about quality, for product managers to reflect in future requirements.

Clearly, there’s more than one way to conduct research, and not every method fits every team. That’s an idea that can be explored at length.

This has me wondering: which method fits what, when, where? Is there a relationship between a team’s development process and the approach to user research (epistemology) that it’s willing to embrace? …between its corporate usability maturity and the approach?

Those are two of the lines of inquiry in my research at Simon Fraser University.

If you liked this post, you may also like Are usability studies experiments?