User-experience trading cards

Series 3 of the user-experience trading cards debuted at the 2009 IA Summit this week. Five Sketches™ is included in this set:

Five Sketches™ is one of the trading cards

The trading cards are a growing set—each card lists one method or technique useful to our industry—and are provided as a perk wherever one of nForm’s speakers presents. The whole set is useful to:

  • help provide consistent terms.
  • illustrate the range of services UX practitioners can offer.
  • help clients understand the options with a simple, brief explanation.
  • kick-start our ideas when we’re facing an unusual problem.
  • inspire our business-development efforts.
  • … and more.

Not at IA Summit? Look for nForm at the WebStrategy Summit, the annual CanUX conference, and elsewhere.

Epistemology of usability studies

Currently, I’m conducting research on usability analysis and on how Morae (TechSmith’s usability-testing software) might influence it. My research gaze is rather academic, in that I’m especially interested in the epistemology of usability analysis.

One of my self-imposed challenges is to make my research relevant to usability practitioners. I’m a practitioner and Certified Usability Analyst (CUA) myself, and I have little time for academic exercises because I work where the rubber hits the road. This blog post outlines what I’m up to.

At Simon Fraser University, I learned that epistemological approaches differ in their assumptions about what is knowable. On one side (below, left), it’s about numbers, rates, percentages, graphs, grids, and tables: proving absolute truths. On the other side (below, right), it’s about seeking objectivity while knowing that it’s impossible, because everything has a cultural context. The epistemology you choose when doing research depends on what you believe. And the epistemology dictates what methods you use and how you report your results.

On the left: “You can be certain of what you know.” On the right: “You cannot be objective about what you know.”

Let’s look at some examples.

Study 1 fits with the view (above, left) that “you can be certain of what you know.” I plan and conduct a quantitative study to measure the time it takes several users to complete two common tasks in a software package: upgrading to the latest version, and activating the software. I make appointments with users. In my workplace, I give each user a scenario and a computer. I observe and time them as they complete the tasks in the software package. My hope is that statistical analysis will give me results that I can report, including the average time on task with error bars, as the graph (right) illustrates.
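
For concreteness, here is a minimal sketch, in Python, of the summary statistic Study 1 aims for: the mean time on task per task, with 95% confidence intervals that become the error bars on the graph. The timings, sample size, and function name are all hypothetical, and the sketch assumes scipy is available.

    # A minimal sketch: mean time on task with 95% confidence intervals,
    # the "error bars" Study 1 hopes to report. All timings are hypothetical.
    from math import sqrt
    from statistics import mean, stdev
    from scipy.stats import t  # Student's t, appropriate for small samples

    def time_on_task_summary(times_sec):
        """Return (mean, half-width of the 95% CI) for one task's timings."""
        n = len(times_sec)
        m = mean(times_sec)
        sem = stdev(times_sec) / sqrt(n)            # standard error of the mean
        half_width = t.ppf(0.975, df=n - 1) * sem   # 95% CI half-width
        return m, half_width

    # Hypothetical timings, in seconds, for Study 1's two tasks.
    upgrade_times  = [312, 450, 287, 530, 398, 341, 472, 366]
    activate_times = [95, 120, 88, 143, 102, 110, 97, 131]

    for task, times in (("Upgrade", upgrade_times), ("Activate", activate_times)):
        m, hw = time_on_task_summary(times)
        print(f"{task}: mean {m:.0f} s, 95% CI ±{hw:.0f} s")

Plotted as a bar chart, the half-widths are the error bars described above.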

Study 2 fits with the view (above, right) that “you cannot be objective about what you know” because all research takes place within a context. To lessen the impact of my presence as a researcher, I contact users to ask if I can study them in their workplace. I observe each user for a day. My hope is to analyse the materials and interactions that I’ve observed in context—complete with typical interruptions, distractions, and stimuli. Since a new software version has just been released, I also hope to observe users as they upgrade. I’ll report any usability issues, interaction-design hurdles, and unmet needs that I observe.

The two studies above are composites of studies I actually conducted.

  • Study 1 revealed several misunderstandings and installation problems, including a user who abandoned the installation process because he believed it was complete. I was able to report the task success rate and have the install wizard fixed.
  • Study 2 revealed that users write numbers on paper and then re-enter them elsewhere, which had not been observed when users visited our site for usability testing. One user told me: “I never install the latest version because the updates can be unstable,” and another, wary of unexpected new defects, said: “I only upgrade if there’s a fix for a feature I use.” I was able to report the paper-based workaround and the users’ feelings about quality, for product managers to reflect in future requirements.

Clearly, there’s more than one way to conduct research, and not every method fits every team. That’s an idea that can be explored at length.

This has me wondering: which method fits what, when, where? Is there a relationship between a team’s development process and the approach to user research (epistemology) that it’s willing to embrace? …between its corporate usability maturity and the approach?

Those are two of the lines of inquiry in my research at Simon Fraser University.

If you liked this post, you may also like “Are usability studies experiments?”

From napkin to Five Sketches™

In 2007, a flash of insight hit me, which led to the development of the Five Sketches™ method for small groups who need to design usable software. Looking back, it was an interesting journey.

The setting. I was working on a two-person usability team faced with six major software and web products to support. We were empowered to do usability, but not design. At the time, the team was in the early stages of Nielsen’s Corporate Usability Maturity model. Design, it was declared, would be the responsibility of the developers, not the usability team. I was faced with this challenge:

How to get usable products
from software and web developers
by using a method that is
both reliable and repeatable.

The first attempt. I introduced each development team to the usability basics: user personas, requirements, paper prototyping, heuristics, and standards. Some developers went for usability training. In hindsight, it’s easy to see that none of this could work without a formal design process in place.

The second attempt. I continued to read, to listen, and to ask others for ideas. The answer came as separate pieces, from different sources. For several months, I was fumbling in the metaphorical dark, having no idea that the answer was within reach. Then, after a Microsoft product launch on Thursday, 18 October 2007, the light went on. While we sat at the bar, the event’s guest speaker, GK Vanpatter, mapped out an idea for me on a cocktail napkin:

  1. Design requires three steps.
  2. Not everyone is comfortable with each of those steps.
  3. You have to help them.

Some key design ideas, conveyed to me on a napkin sketch in a Vancouver bar.

The quadrants in the sketch are the conative preferences, or preferred problem-solving styles.

I recognised that I already had an answer to step 3, because I’d heard Bill Buxton speak at the 2007 UPA conference, four months earlier. I could help developers be comfortable designing by asking them to sketch.

It was easier said than done. Everyone on that first team showed dedication and courage. We had help from a Vancouver-based process expert who skilfully debriefed each of us and then served us a summary of remaining problems to iron out. And, when we were done, we had the beginnings of an ideation-and-design method.

Since then, it’s been refined with additional teams of design participants, and it will be refined further—perhaps changed significantly to suit changing circumstances. But that’s the story of the first year.