User-experience trading cards

Series 3 of the user-experience trading cards debuted at the 2009 IA Summit this week. Five Sketches™ is included in this set:

Five Sketches™ is one of the trading cards

The trading cards are a growing set—each card lists one method or technique useful to our industry—and are provided as a perk wherever one of nForm’s speakers presents. The whole set is useful to:

  • help provide consistent terms.
  • illustrate the range of services UX practitioners can offer.
  • help clients understand the options with a simple, brief explanation.
  • kick-start our ideas when we’re facing an unusual problem.
  • inspire our business-development efforts.
  • … and more.

Not at IA Summit? Look for nForm at the WebStrategy Summit, the annual CanUX conference, and elsewhere.

Epistemology of usability studies

Currently, I’m conducting research on usability analysis and on how Morae software might influence that. My research gaze is rather academic, in that I’m especially interested in the epistemology of usability analysis.

One of my self-imposed challenges is to make my research relevant to usability practitioners. I’m a practitioner and CUA myself, and I have little time for academic exercises because I work where the rubber hits the road. This blog post outlines what I’m up to.

At Simon Fraser University, I learned that epistemological approaches have different assumptions about what is knowable. On one side (below, left), it’s about numbers, rates, percentages, graphs, grids, tables, proving absolute truths. On the other side (below, right), it’s about seeking objectivity while knowing that it’s impossible because everything has a cultural context. The epistemology you choose when doing research depends on what you believe, and that epistemology dictates what methods you use and how you report your results.

Left: You can be certain of what you know.
Right: You cannot be objective about what you know.

Let’s look at some examples.

Study 1 fits with the view (above, left) that “you can be certain of what you know.” I plan and conduct a quantitative study to measure the time it takes a series of users to complete two common tasks in a software package: upgrading to the latest version of the software, and activating the software. I make appointments with users. In my workplace, I give each user a scenario and a computer. I observe them and time them as they complete the tasks by using the software package. My hope is that statistical analysis will give me results that I can report, including the average time on task with error bars, as the graph (right) illustrates.
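To make that reporting concrete, here’s a minimal sketch of the arithmetic behind such a graph: the mean time on task and a 95% confidence interval to draw as error bars. The task times and labels below are invented for illustration; a real analysis would use the observed data, and a t-distribution for samples this small.

    # A sketch of the Study 1 analysis: mean time on task with a 95%
    # confidence interval for the error bars. Timings are hypothetical.
    import math
    import statistics

    upgrade_times = [212, 247, 198, 305, 221, 260, 233, 280]   # seconds
    activate_times = [95, 120, 88, 140, 102, 131, 99, 115]     # seconds

    def summarize(label, times):
        n = len(times)
        mean = statistics.mean(times)
        sd = statistics.stdev(times)
        # The normal approximation keeps the sketch dependency-free;
        # use a t-distribution for small samples in real work.
        half_width = 1.96 * sd / math.sqrt(n)
        print(f"{label}: mean {mean:.0f} s, 95% CI ±{half_width:.0f} s (n={n})")

    summarize("Upgrade task", upgrade_times)
    summarize("Activation task", activate_times)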

Study 2 fits with the view (above, right) that “you cannot be objective about what you know” because all research takes place within a context. To lessen the impact of conducting research, I contact users to ask if I can study their workplace. I observe each user for a day. My hope is to analyse the materials and interaction that I’ve observed in context—complete with typical interruptions, distractions, and stimuli. Since a new software version has just been released, my hope is that I’ll get to observe them as they upgrade. I’ll report any usability issues, interaction-design hurdles, and unmet needs that I observe.

The studies above are composites of studies I have conducted.

  • Study 1 revealed several misunderstandings and installation problems, including a user who abandoned the installation process because he believed it was complete. I was able to report the task success rate and have the install wizard fixed.
  • Study 2 revealed that users write numbers on paper and then re-enter them elsewhere, which had not been observed when users visited our site for usability testing. One user told me: “I never install the latest version because the updates can be unstable,” and another said: “I only upgrade if there’s a fix for a feature I use,” to avoid unexpected new defects. I was able to report the paper-based workaround and the users’ feelings about quality, for product managers to reflect in future requirements.

Clearly, there’s more than one way to conduct research, and not every method fits every team. That’s an idea that can be explored at length.

This has me wondering: which method fits what, when, where? Is there a relationship between a team’s development process and the approach to user research (epistemology) that it’s willing to embrace? …between its corporate usability maturity and the approach?

Those are two of the lines of inquiry in my research at Simon Fraser University.

If you liked this post, you may also like Are usability studies experiments?

Put the card in the slot

We know that human brains use patterns (or schemata) to figure out the world and decide what to do. This kind of cognitive activity takes place very quickly, which means we can react quickly to the world around us, as long as the pattern holds.

Here’s a pattern (or schema) that your brain may know: to put a card in a slot, use the narrow edge. If the card has a clipped corner, use that edge. Some examples:
[Image: clipped cards, showing the physical affordance]
This pattern is easy for your brain because of the physical cues—also known as affordance. The narrow edge + clipped corner say: “This side goes in.”

A pattern that’s harder for your brain to learn is the magnetic-stripe bank card. You have to think about which way the stripe goes. And it seems harder when you’re in a hurry, or when you feel less safe. Have you used an outdoor bank machine at night, with people hanging around?

Vancouver transit tickets are awful because they don’t follow the card-in-the-slot pattern correctly. As a passenger on the SkyTrain (an elevated rail line), you must punch your ticket at the station entrance. The machines are placed so you must turn your back on the drug dealers and their customers who hang out in the station. As with bank cards, these transit tickets fit four ways, but only one orientation will punch the ticket, and it’s not the edge with the clipped corner. There’s a yellow arrow to assist, but while the arrow is clearly visible in daylight, it’s almost invisible in the yellow-tinged fluorescent lighting of most SkyTrain stations. Also, the yellow arrow must be face down, which is counterintuitive because then you cannot see it. In the photo (above, right), can you find the arrow on the ticket?

Did the designers consider how their ticket machines or bank cards would make passengers feel? Designing for emotion seems to have legs these days, but it’s not a new idea.

Doing better. I wonder whether the designers of these systems (transit tickets, bank cards) considered all possible options. It’s a Five Sketches™ mantra: You get a better design when you first saturate the design space. This can include doing a competitor analysis to seek out other ideas. And there are other models for transit tickets. I’ve seen Paris subway tickets with the magnetic stripe in the middle, so passengers could insert the ticket face up or face down, frontwards or backwards. The more recent tag-on/tag-off technology used from London, UK, to Perth, WA, avoids the insert-your-card problem—though the overall experience may be worse, since failure to tag off means paying the highest possible fare.

As for bank cards, IBM’s designers must have modelled bank cards on credit cards, which had the magnetic stripe toward the top instead of in the middle. With the stripe off-centre, fewer of the four possible orientations will read, so customers are more likely to insert the card incorrectly. An obvious question to have asked at the design stage: can we design a bank machine to read the card regardless of how it’s inserted?

What “standard” GUI means

If you decide to forgo the design stage and reuse or copy an existing GUI or interaction design, your life is easier.

But how do you know when you have a standard to follow?

An obvious place to find standards is in the precedents set by your own Development team. If your standards are not documented, consider a BarnRaising afternoon, once a month, until the high-priority standards are documented.

And while many of us have enjoyed the comfort of a style guide to follow, there’s more flexibility now, as Milan Guenther wrote in Photos for interaction:

The barrier between web pages and desktop software is beginning to disappear, and modern rich client user interface technologies such as Silverlight/WPF, Air, or Java FX enables designers to take the control over the whole user experience of a software product. Style guides for operating systems like MacOS or Windows become less important because software products are available on multiple platforms, incorporating the same custom design independently from OS-specific style guides. Software companies and other parties involved begin to use the power of a distinct visual design to express both their brand identity and custom interactive design solutions to the users.

But not everything needs to be reinvented. There are plenty of common user tasks that the operating system supports with default GUI, including Open, Save, Print, Search/Replace, Font, Color, and Page Setup tasks. These defaults also serve as patterns for the custom GUI that you design—the visible objects, and the invisible ones (see the sketch after this list for what reusing a default looks like in practice), such as:

  • mental models
  • workflows
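To show what reusing the default GUI can look like, here’s a minimal sketch in Python/Tk that calls the operating system’s standard Open dialog rather than building a custom file picker. The dialog title and file-type filter are hypothetical.

    # Reusing the OS-supplied GUI for a common task: askopenfilename()
    # shows the platform's standard Open dialog, so users get the
    # file-picking interaction they already know.
    import tkinter as tk
    from tkinter import filedialog

    root = tk.Tk()
    root.withdraw()  # no main window needed for this sketch

    path = filedialog.askopenfilename(
        title="Open drawing",  # hypothetical task
        filetypes=[("Drawings", "*.dwg"), ("All files", "*.*")],
    )
    print("User chose:", path or "(cancelled)")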

The two traditional operating systems give us plenty of standard controls to meet any design challenge, including buttons, boxes, sliders, progress indicators, notifications, and much more. If you develop for MacOS, Apple publishes various guidelines, including for user experience. If you develop for Windows, the Vista UX Guide is updated regularly, with new sections for ribbons, touch, pen, and printing.

[Image: sample standard controls]

Despite everything I just said about following standards, I’m also a big believer in changing things that are broken. An example: on my desk I can have two sheets of paper, copies of the same document. On my desk, they don’t need to have different names—even if each sheet has different comments written on it by different people. On my computer, the same two documents must either have different names or be in different folders. Surely the computer can keep track of both, uniquely, without forcing the user to choose different names? The unique-name requirement is an example of architecture dictating the interface. Alan Cooper wrote about this and other examples of the tail wagging the dog in About Face: The Essentials of User Interface Design.
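As a sketch of the alternative, an application could identify each document by an internal unique ID and treat the display name as just another attribute; then two documents can share a name, the way two printed copies can share a desk. The classes and fields below are hypothetical, not how any particular file system works.

    # A sketch: documents keyed by an internal unique ID, so display
    # names need not be unique.
    import uuid
    from dataclasses import dataclass, field

    @dataclass
    class Document:
        name: str                       # display name; need not be unique
        comments: list = field(default_factory=list)
        doc_id: str = field(default_factory=lambda: uuid.uuid4().hex)

    store = {}                          # doc_id -> Document

    def add(doc):
        store[doc.doc_id] = doc         # uniqueness comes from the ID

    add(Document("Quarterly report", comments=["Reviewer A: fix table 2"]))
    add(Document("Quarterly report", comments=["Reviewer B: looks good"]))

    same_name = [d for d in store.values() if d.name == "Quarterly report"]
    print(len(same_name), "documents share the same display name")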

If you’re wondering when to copy/reuse, and when to design, check out my blog post, GUI: copy it or design it?

Generative design vs. Five Sketches™

Leah Buley talked about generative design at the South by Southwest Interactive conference today. Buley feels design methods are lacking in the set of professional tools we use for software development: “We don’t have so many good, reliable, repeatable design techniques.” I agree with her.

Buley tells how, in her first design session at Adaptive Path, she was handed a pen and paper and told to sketch. The point was “to crank out a lot of ideas in a short time.” That’s what generative design is all about: saturating the design space with ideas. Design programs teach generative design, but there’s no generative design taught in programs for developers, QA staff, technical communicators, product managers, or marketers.

There are already several design methods for software development teams to choose from, including Buley’s wonderful grab bag. Others I know of are Five Sketches™ (obviously) and Design Studio, both of which focus on the complete process of producing a design, start to finish, with all its challenges. Microsoft’s Bill Buxton told the UPA 2007 conference that he insists on generative design, and I’d love to see that in action. Each method differs slightly, but they all work because….

Why do they all work? Because of generative design. Generative design addresses what Buley calls her “dirty secret.” She freely confesses, about her design work before Adaptive Path: “I had very little confidence that what I was presenting as the design was in fact the one, optimal solution to the problem.” My experience teaching Five Sketches™ tells me that once you’ve participated in a generative-design process, you’ll know that you can have that confidence.

Buley’s great presentation (slides with audio) is here. Have a look:

A tangential question: when will computer science programs teach Basic Design Methods?

GUI: copy it or design it?

I’m a big believer in following the standards for GUI and interaction design. But when do you copy or reuse an existing design, and when do you design something new? Here’s my guideline for when to design and when to reuse or copy the GUI and interaction:

Reuse or copy the existing GUI when…

  • …there is an external standard. For example: the Vista UX Guide recommends […].
  • …there is a GUI or interaction precedent in your software. For example: we distinguish between Import and Open.
  • …the developer wants to re-use existing GUI. For example: use a variable to change the dialog-box title.
  • …the market has certain GUI expectations. For example: the iPhone promoted gestures, so others had to copy them.
  • …there aren’t sufficient resources to design. For example: “There’s no time in the schedule for design.”

Design something new when…

  • …the precedent uses an out-of-date interaction style. For example: the users cannot drag an object to move it.
  • …the precedent uses an incorrect mental model. For example: user preferences are saved in a text file.
  • …usability tests say the precedent reduces performance. For example: 70% of occasional users do this wrong.
  • …competitors have implemented the feature better. For example: when they zoom in, it’s smooth, not stuttered.
  • …the product requires a certain strategic direction. For example: all data must be sharable, locally and remotely.

Reuse your existing design, or design it using Five Sketches™. Copy the competitor’s design, or leapfrog their design. If you ship a feature with poor interaction or poor usability, where’s the user value and where’s your credibility? Not all design processes are lengthy. Five Sketches™ takes about a day for most features that you’re tempted not to design.

Here’s an example: did Internet Explorer leapfrog Firefox?

Standard OK-Cancel button order

I have two stories about command buttons.

Quite a few years ago, a team member walked me through a new dialog box. He entered some data, and then unintentionally clicked the Cancel button. He made this error twice in a row, thus losing his changes twice in a row. I pointed out that the OK and Cancel buttons were in the wrong order. The developer switched the buttons to the Windows-standard layout (below, right), and the user-performance problem was solved.
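As a minimal sketch of that Windows-standard layout (not the actual dialog from the story), the affirmative button comes before Cancel and both sit at the bottom right of the dialog. The dialog contents here are invented.

    # Windows-standard command-button layout: OK before Cancel,
    # anchored to the bottom-right of the dialog.
    import tkinter as tk

    dialog = tk.Tk()
    dialog.title("Edit item")           # hypothetical dialog

    tk.Entry(dialog, width=40).pack(padx=12, pady=12)

    buttons = tk.Frame(dialog)
    buttons.pack(side="bottom", anchor="e", padx=12, pady=12)
    tk.Button(buttons, text="OK", width=10, default="active",
              command=dialog.destroy).pack(side="left", padx=(0, 6))
    tk.Button(buttons, text="Cancel", width=10,
              command=dialog.destroy).pack(side="left")

    dialog.mainloop()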

A few years later, on a different project, not only were the buttons in non-standard order, they used non-standard wording and they used coloured icons. My request to follow the Windows standard was met only halfway and then sent for Beta testing before I saw it again. The buttons were now in the correct order, but the button names were changed, and the names and icons were still non-standard. Beta testers loudly protested the change. (Beta testers are often expert users, and experts abhor any change that slows them down.) At the time, the company was only a few steps up the Nielsen Corporate Usability Maturity model, so instead of completing the change to Windows-standard OK and Cancel buttons, the buttons were rolled back to appease the protesting Beta users. I found out too late to retest with Windows-standard buttons, so there was no data to convince the developers. For me, it was an opportunity to learn from failure. 🙂

Why is non-standard so hard?

Try this Stroop test (right). Ignore the words. Instead, identify the colours, out loud. No doubt, the second panel went slower and took more effort.

Try the variation, at left. Find the first occurrence of the word Blue. Next, find the first occurrence of the colour blue.

Just as mismatches between text and colour slow your Stroop-test performance, mismatches between standard and non-standard OK and Cancel buttons slow user performance. Our Beta users clicked the wrong buttons—a huge waste of their time—because the new solution didn’t follow any standard. The Beta testers were right to protest, but wrong in their demand to revert to the original non-standard state. (See: Customers can’t do your job.)

Users learn GUI patterns—patterns that are widely reinforced by user experience—and users expect GUI to behave predictably, so it’s unwise to deviate radically from the standards, unless there are product-management reasons to do so.

I’ll write more about following standards versus designing something new in the coming few posts.

P.S. It looks like Jakob Nielsen got here before me.

Users are not used to it

For several years, I did usability testing on CAD-style software that was full of legacy code, some of which preceded Windows 98.

Some of that legacy code dealt with CAD objects that displayed on screen. To work with these objects, users had a choice of menu commands and toolbar buttons, supplemented by dialog boxes. For example, to move an object, users could not simply click and drag it; they would choose a command, click the object, and then, in a dialog box, enter the distance to move the object.

That’s the way CAD programs worked when that legacy code was originally written.

Over the years, during my usability testing of various features, I noticed a growing trend toward direct manipulation. That is, to work with an object, users would try to click it or drag it. They would do this without thinking. Even long-time users, faced with a new feature (studies from 2005-2006), would try direct manipulation first:

  • 100% of the test subjects clicked a cube, trying to select it.
  • 100% of the test subjects dragged a point or line, trying to move it.
  • 100% of the test subjects clicked in the window, trying to create a point.
  • 100% dragged across points, trying to select them.

But the new features were built on the legacy code, so they had the command-driven interaction style. A simple click on the object was usually a dead end for users.

And the users would say: “Darn,” and then look for another vector—another pathway—to complete the task.
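To contrast the two interaction styles, here’s a minimal sketch of the direct manipulation the test subjects kept reaching for: click a shape and drag it to move it, no command and no dialog box. The shape and geometry are made up.

    # Direct manipulation: press on a canvas object and drag it to a new
    # position, instead of choosing a command and typing a distance.
    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=400, height=300, background="white")
    canvas.pack()

    item = canvas.create_rectangle(50, 50, 110, 110, fill="lightblue")
    drag = {"x": 0, "y": 0}

    def on_press(event):
        drag["x"], drag["y"] = event.x, event.y   # where the drag started

    def on_drag(event):
        canvas.move(item, event.x - drag["x"], event.y - drag["y"])
        drag["x"], drag["y"] = event.x, event.y

    canvas.tag_bind(item, "<ButtonPress-1>", on_press)
    canvas.tag_bind(item, "<B1-Motion>", on_drag)
    root.mainloop()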

The reasons we didn’t provide direct manipulation:

  • “Our users are used to the way it is now.” Clearly, usability-test results negated that argument. Users are not so accustomed to old-style interaction, because their first instinct for new CAD tasks in an existing product was direct manipulation.
  • “There’s not enough bang for the buck” because the opportunity cost (the cost of skipping other possible projects) was deemed too great. It’s hard to argue with this, as a usability analyst. The company opted for more features, and may have increased its risk of being leapfrogged by the competition, as discussed in an earlier blog post.

Your usability advantage

When businesses buy software, rather than choose the software with the lowest purchase price, they ought to consider the total cost of ownership—including the added productivity and enjoyment that usability and user-experience provide.

Every software company will say “our product is usable,” so how can you prove to prospective customers that you’ve really got usability?

Your product has a usability advantage if:

  1. Your development team’s motivation is right. Software meets customer business needs if it came out of a design and development process that considers stakeholders beyond the development team. (Incidentally, getting the motivation right is what Five Sketches™ was designed to help development teams do.)
  2. Trials quickly reveal product effectiveness. In a hands-on trial, you want users to try common tasks, figure them out, and say they liked the experience. A good hands-on trial reduces a competitor’s vendor demo to an infomercial.
  3. It’s about information more than data. Data requires cognitive transformation in the user’s head to become information. Information is ready now to support insight and appropriate action.
  4. Change management is minimal. Your mental model is clearly evident and the user experience is pleasant, so resistance to change is lower. Employees will see evidence of leadership rather than another “solution” imposed on them.
  5. Your training teaches skills. Pick one: training that leads users through a maze (an unusable interface), or training that teaches users smarter ways to work toward their goals.
  6. You have metrics. If you tell customers how long it will take new users to start performing, you show your respect for their total cost of ownership.
  7. You have references. A product reference is as close as a Google search. In a web-2.0 world, your best “reference” could be an engaged, loyal user community.

The first 4½ to 5 points, above, require the Development team’s involvement, and the last few benefit from Dev involvement. Clearly, a usability advantage requires the involvement of other departments, directed by a product manager who works the Marketing, Sales, Support, and Development teams in concert. 🙂

This post was inspired by a Howard Hambrose article in Baseline Magazine, which recommends that IT professionals question software usability before they buy and implement.

Heuristics at the design stage

On the IxDA’s discussion list for interaction designers, Liam Greig posted his “human friendly” version of a heuristics checklist based on Nielsen’s originals and the ISO’s ergonomics of human-system interaction.

Here are just the headings and the human-friendly questions, which are useful at a project’s design stage.

A design should be…

  • transparent. Ask: Where am I? What are my options?
  • responsive. Ask: What is happening right now? Am I getting what I need?
  • considerate. Ask: Does this make sense to me?
  • supportive. Ask: Can I focus on my task? Do I feel frustrated?
  • consistent. Ask: Are my expectations accurate?
  • forgiving. Ask: Are mistakes easy to fix? Does the technology blame me for errors?
  • guiding. Ask: Do I know where to go for help?
  • accommodating. Ask: Am I in control? Am I afraid to make mistakes?
  • flexible. Ask: Can I customize my experience?
  • intelligent. Ask: Does the technology know who I am? Did the technology remember the way I left things?

Sometimes, at the analysis end of a Five Sketches™ ideation-design session, the design participants see more than one path forward. Use heuristics to frame the discussion of how each path rates, to reach a decision faster.

The heuristics can be about more than just usability. You can also assess coding costs, maintenance costs, code stability…. And, of course, you must also assess potential designs against the project’s requirements.
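Here’s a minimal sketch of one way to frame that discussion: rate each candidate path against the heuristics (and the cost concerns) the team cares about, weight them, and compare totals. The criteria, weights, and scores below are invented for illustration.

    # Comparing design paths against weighted heuristics after an
    # ideation session. All numbers are hypothetical.
    criteria = {                 # weight: how much the team cares
        "transparent": 3,
        "forgiving": 3,
        "consistent": 2,
        "coding cost": 2,
        "maintenance cost": 1,
    }

    # Scores from 1 (poor) to 5 (excellent) for each candidate path.
    paths = {
        "Path A: wizard": {"transparent": 4, "forgiving": 5, "consistent": 4,
                           "coding cost": 2, "maintenance cost": 3},
        "Path B: inline edit": {"transparent": 5, "forgiving": 3, "consistent": 5,
                                "coding cost": 4, "maintenance cost": 4},
    }

    for name, scores in paths.items():
        total = sum(criteria[c] * scores[c] for c in criteria)
        print(f"{name}: weighted score {total}")

A scoring matrix doesn’t make the decision for the team, but it keeps the conversation anchored to the heuristics rather than to whoever argues loudest.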