When I conduct usability studies, I use a laptop and Morae to create a portable, low-cost usability lab. I typically visit participants on site, so I can observe their working environment. I provide a scenario that participants can follow as they try using a new software feature. Morae records their voices, facial expressions, on-screen actions, clicks, and typing, and saves the raw data in a searchable, graphable format. Afterward, I review the recordings (I rarely take written notes) and make evidence-based recommendations to the feature’s project team.
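Morae’s recording format is proprietary, so purely as a hypothetical illustration, here is roughly how I think of the resulting data: a timestamped event log that can be filtered (searched) and aggregated (graphed). Every name in this sketch is my own invention, not Morae’s API.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One recorded observation from a session (hypothetical schema)."""
    t: float      # seconds from session start
    kind: str     # e.g. "click", "keypress", "utterance", "facial"
    detail: str   # what was clicked, said, typed, or expressed

# A tiny sample of what a session log might contain.
session = [
    Event(12.4, "click", "Next button, wizard step 1"),
    Event(30.1, "utterance", "Wait... I thought I was done?"),
    Event(31.8, "facial", "frown"),
]

# "Searchable": pull out every spoken remark for review.
utterances = [e for e in session if e.kind == "utterance"]

# "Graphable": count events by kind, ready for a bar chart.
counts = {}
for e in session:
    counts[e.kind] = counts.get(e.kind, 0) + 1
print(counts)  # {'click': 1, 'utterance': 1, 'facial': 1}
```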
For example, in one usability study I realised that the order of the steps was causing confusion and doubt. The three-step wizard would disappear after step 2, so users could modify their data on the screen. Users thought the task was complete at step 2, so the wizard’s reappearance for step 3 caused confusion. I recommended we change the order of the steps.
Changing the order fixed the “confused users” problem. But was this scientific? Aside from the fact that I’m researching human subjects rather than test tubes, am I following a scientific model? At first glance, the usability test I conducted doesn’t seem to follow the positivist model of what constitutes an experiment:
For example, unlike the test-tube lab experiment of a chemist, my usability test had no control group. Also, my report went beyond factual conclusions to actual recommendations (based on my expert opinion) for the next iteration.
I can argue this both ways.
On the one hand, my usability test does have a control group of sorts if I take the next iteration of the product and repeat the test with additional participants, to see whether my recommendations solved the problem. I could then compare task-completion rates and task durations between the two iterations.
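To make that comparison concrete, here is a minimal sketch (with invented numbers, not data from my study) of how the two iterations could be compared statistically: Fisher’s exact test for completion rates and a Mann-Whitney U test for task durations, both reasonable choices for small samples.

```python
from scipy import stats

# Hypothetical results: iteration 1 (original step order) vs. iteration 2.
completed = [4, 9]   # participants who completed the task
failed    = [6, 1]   # participants who did not
durations_v1 = [312, 287, 405, 298, 351]  # task time in seconds
durations_v2 = [201, 189, 240, 195, 210]

# Completion rate: did the redesign change it beyond chance?
_, p_completion = stats.fisher_exact([[completed[0], failed[0]],
                                      [completed[1], failed[1]]])

# Task duration: nonparametric test, since n is small and skew is likely.
_, p_duration = stats.mannwhitneyu(durations_v1, durations_v2)

print(f"completion-rate p = {p_completion:.3f}, duration p = {p_duration:.3f}")
```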
On the other hand, if I were asked to determine whether the product has an “efficient” workflow or a “great” user experience—which are subjective measures—I’d say a positivist-model experiment is inappropriate. To measure a user’s confusion or satisfaction, I might consider their facial expressions, verbal utterances, and self-reported ratings. This calls for a research design whose epistemology is rooted in post-positivist, ethnomethodological, situated, standpoint, or critical approaches, and has more in common with research done by an ethnographer than by a chemist.
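The post doesn’t name a rating instrument, but to show what “self-reported ratings” can look like in practice, here is the standard scoring formula for the System Usability Scale (SUS), one widely used post-session questionnaire; the responses below are invented.

```python
def sus_score(responses):
    """Score a 10-item SUS questionnaire (each answer 1-5).

    Odd-numbered items are positively worded (score = answer - 1);
    even-numbered items are negatively worded (score = 5 - answer).
    The summed contributions are scaled by 2.5 to give 0-100.
    """
    total = 0
    for i, answer in enumerate(responses, start=1):
        total += (answer - 1) if i % 2 == 1 else (5 - answer)
    return total * 2.5

# One participant's (invented) answers to the 10 SUS items.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

A single score like this is still only one signal; in an interpretivist design it would sit alongside the facial expressions and verbal utterances rather than replace them.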
Jerome,
Great post. This is a really interesting question when you begin to break a usability study down and compare it to the three pillars of scientific procedure: hypothesis, experimentation, and proof.
When you consider how a usability study usually begins, with some sort of hunch that an aspect of the design isn’t quite performing up to par, that hunch fits the definition of a hypothesis pretty well. Copernicus proposed the hypothesis that the Earth and planets revolve around the sun after he made certain celestial observations that didn’t quite support the notion that the Earth stood at the center of the universe. That was his “hunch” or hypothesis.
Then you move on to the second pillar, “Experiment”, which through good usability testing practices certainly can qualify, in my opinion, as a real experiment.
Finally, I do agree with you on the conclusions part. With usability testing recommendations you do go a bit beyond formal conclusions, building on what those conclusions gave you and using your expert opinion to decide actionable next steps the client can take.
Hi Jerome,
Thanks for writing about these questions. They are ignored all too often, yet they are critical to a robust research praxis.
I would argue, respectfully, that your approach cannot be ethnomethodological or even interpretivist in either case.
You would be using “subjective” measures such as facial expressions, but you would not be engaging in the interpretive practice so central to qualitative research. Your research design would initially have to be far less structured (e.g., not your laptop, but THEIR computer, sitting somewhere in their house or office). The participant would tell you about the topic as you guided the conversation, but they may or may not find it relevant to show you the computer. They certainly would not complete (or attempt to complete) a pre-determined flow, which is engineered entirely by your research desire to uncover its effectiveness.
In this sense, I would argue usability studies are necessarily positivist, though they may be less “scientific” than, say, lab-based experiments. But your guiding of the “user,” your pre-determined desire to find a specific insight about the flow itself, and your bringing of the laptop (an “instrument”) make this far more a mirror of positivist research design.
Interpretive research, I would argue, necessarily puts more power in the hands of the “participants” and therefore really can’t get to the answers you need in usability studies.
Thanks again for writing such important work!