Predict your “usable release” date by integrating user research

Questions that stakeholders, project managers, and product owners have in common:

  • When will the product be finished?
  • When will a usable product be released?

Both questions can be answered with the same method: a burn-down chart. But the second question requires adding certain user research findings to the chart.
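Here's one plausible sketch, in Python, of how that could work: project the "product finished" date from the whole backlog and the team's average velocity, then project the "usable release" date from only the backlog items that user research flagged as necessary for a usable product. The items, point estimates, velocity, and dates are all invented for illustration.

    from datetime import date, timedelta

    # Hypothetical backlog: (item, story points, flagged by user research as needed for a usable product)
    backlog = [
        ("import data",        8, True),
        ("edit records",       5, True),
        ("export to CSV",      3, False),
        ("keyboard shortcuts", 2, False),
        ("offline mode",      13, True),
    ]

    velocity = 6                 # average points completed per sprint (made up)
    sprint = timedelta(days=14)
    start = date(2024, 1, 8)     # start of the next sprint (made up)

    def projected_date(points_remaining: int) -> date:
        """Project a completion date from remaining points and average velocity."""
        sprints_needed = -(-points_remaining // velocity)   # ceiling division
        return start + sprints_needed * sprint

    all_points    = sum(points for _, points, _ in backlog)
    usable_points = sum(points for _, points, must_have in backlog if must_have)

    print("Product finished:", projected_date(all_points))      # full backlog burned down
    print("Usable release:  ", projected_date(usable_points))   # only the must-have items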

Continue reading “Predict your “usable release” date by integrating user research”

User research to adjust to regional needs

When you communicate with audiences around the world, the text in your software or service's user interface is important. You'll need to translate it.

Translation is just one step. Delivering software in multiple languages takes more than that. Products and services need to be adjusted to regional expectations.

Even things you may assume are universal concepts—such as names or money—may need to be adapted. User research can uncover those regional differences. Here’s a quick look at:

  • names, specifically middle initials
  • money, in the form of bank loans
Continue reading “User research to adjust to regional needs”

Reduce spam without hindering usability

If your website lets visitors sign up, join in, or add comments and reviews, then—in addition to the legitimate details you want people to contribute—you’re getting some garbage. Some of this garbage is sent by spam bots.

Spam bots post content that detracts from your website. Spam lowers your site’s perceived quality. Spam posts may include links that pull traffic to competing sites or trick your visitors into a scam. The cost of spam is hard to quantify.

Plenty of experts recommend methods to avoid spam. But in a series of user research studies, I observed that anti-spam measures impose a cost of their own. They can add friction that causes visitor abandonment and attrition. The cost of this is easier to quantify.

Some anti-spam measures impose more pain than others. I decided to assess and compare them.

Continue reading “Reduce spam without hindering usability”

Up and down the TV channels

My television lets me step through the channels. To do this, I use the remote control's CH button. Similarly, my television lets me page through the list of programs, five channels at a time. To do this, I use the remote control's PG button. In fact, it's the same button that performs both the stepping and paging functions.

My remote control

The programs in the list are shown in numeric order, so smaller numbers are higher in the list. Pressing “+” will page the list up, so “+” leads to smaller numbers. Similarly, pressing “–” will page the list down, to larger numbers. This follows the same mental model as scrolling in a computer window, including the window you're reading this in right now.

Scrolling up

In contrast, when I’m watching one channel (full-screen, so with the program guide hidden), the same two buttons have the inverse effect. The “+” button increases the number of the channel (which is like moving down in the programs list, not up). This follows the same mental model as a spin control in many computer programs.

Spinning up
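To make the inconsistency concrete, here's a toy model of the two mappings just described. It is not the remote's actual specification; the page size and channel numbers are invented.

    # Toy model of the one rocker button in its two modes (not the remote's real firmware).
    PAGE_SIZE = 5  # the programme guide shows five channels at a time

    def press(button: str, mode: str, channel: int) -> int:
        """Return the channel number the display moves toward after a press."""
        if mode == "guide":          # PG: "+" pages the list up, toward smaller numbers
            step = -PAGE_SIZE if button == "+" else PAGE_SIZE
        elif mode == "fullscreen":   # CH: "+" steps to the next larger channel number
            step = 1 if button == "+" else -1
        else:
            raise ValueError(mode)
        return max(1, channel + step)

    print(press("+", "guide", 20))       # 15 -- the same "+" press, smaller number
    print(press("+", "fullscreen", 20))  # 21 -- the same "+" press, larger number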

Imagine using the one button in succession for the two functions:

first as PG to page through the menu
  and then, after selecting a channel,
as CH to step through the channels.

I see in this an excellent problem for a practicum student or a class assignment that combines user research, design, GUI, and handheld devices. Possible questions:

  • What research would confirm that this is, in fact, a problem?
  • If you confirm the problem, is it entirely on the hardware side? How many people are affected?
  • Is there a business case to fix the problem?
  • How could you fix it? What design methods and processes would you use? Why?
  • How could you demonstrate that your design fixes the problem? Is there a lower-cost way to validate the design, and, if so, what are the trade-offs?

Testing in the UX-design process

Three weeks ago, a client called me. They had just completed release 1.0 of a new Web application that will replace their current flagship product. The client was asking about summative usability testing to evaluate how well the product performs in the hands of users, because they want their customers to succeed.

Since the product is an enterprise-wide product that requires training, one thing the client specifically asked about was whether the Help is a help to users.

Do you need help?

A quick heuristic review I did turned up no obvious problems in the Help, so we decided on user observation with scenarios. In a preparatory dry run done a few weeks ago, I supplied a participant with a few scenarios and some sample data. The participant I observed was unable to start two of the scenarios, and completed the third scenario incorrectly by adding data to the wrong database.

The Help didn’t help her. The participant was able to find the right Help topic, but she completely misinterpreted the first step in the Help’s instructions.

The team had not anticipated the apparent problem that turned up during the dry run. Assuming it is a real problem—and this can’t be more than an assumption given the sample size of 1—this story nicely illustrates the benefit of summative testing, as you’ll see below.

Best practices working together

The team, including a product manager, several developers, a technical communicator as Help author, and me as a contract usability analyst, used these best practices:

  • The Help author used a single-sourcing method. The most common GUI names, phrases, and sentences are re-used, inserted into many topics from one source, like a variable (see the sketch after this list). In almost every Help topic, the problematic first step was one such re-usable snippet.
  • The product manager assesses the bugs based on severity and cost, ensuring that the low-hanging fruit and the most serious defects get priority.
  • In a heuristic review of the Help, I (wearing a usability-analyst hat) did not predict that the first step in most topics would be misinterpreted. Heuristic reviews, when conducted by a lone analyst, typically won’t predict all usability problems.
  • The developers use an Agile method. At this stage of their development cycle, they build a new version of the product every Friday, and, after testing, publish it the following Friday.
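To illustrate the single-sourcing point from the first bullet: each topic references a shared snippet by name, so rewording the snippet once updates every topic that uses it. The tool the team actually used isn't named here, and the GUI wording and topic titles below are invented; this is only a minimal sketch of the principle.

    # Invented illustration of single-sourcing: one snippet inserted into many topics.
    snippets = {
        "open_project": "Choose File > Open Project and select your project file.",
    }

    topics = {
        "Import data":   "1. {open_project}\n2. Click Import and choose a data file.",
        "Export report": "1. {open_project}\n2. Click Export and pick a format.",
    }

    # Rewording snippets["open_project"] once corrects the first step in every topic.
    for title, body in topics.items():
        print(f"--- {title} ---")
        print(body.format(**snippets))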

After the dry run uncovered the apparent problem, the product manager said: “Let’s fix it.” Since the Help author used re-usable snippets, rewording in one place quickly fixed the problem throughout the Help. And the company’s Agile software development method meant the correction has already been published.

Was this the right thing to do? Should an error found by one participant during a dry run of upcoming usability tests result in a change? The team’s best practices certainly made this change so inexpensive as to be irresistible. With the first corporate customer already migrated to the new product, my client has a lot riding on this. I can’t be certain this rewritten sentence has improved the Help, but—along with the other bugs they’ve fixed—I know it increases my client’s confidence and pride in their product’s quality.

It’ll be interesting to see what the upcoming user observations turn up.

Reminding myself of things I already know

The actual user-observation sessions are still ten days away, but the dry run reminded me of things I already know:

  • Despite each professional’s best efforts, there will always be unanticipated outcomes where users are involved. Users have a demonstrated ability to be more “creative” than designers, developers, and content authors, simply by misinterpreting our work and putting it to unintended uses.
  • The best practices in each discipline can dovetail and work together to allow rapid iteration of the product by the team as a whole. A faster response means fewer users will be affected and the cost of support—and of the rapid iteration—will be lower. A good development process adjusts practices across teams (product management, research, development, user experience, design, tech-comm, quality assurance) so the practices dovetail rather than conflict.
  • Summative testing helps validate and identify what needs to be iterated. Testing earlier and more often means that fewer or perhaps no users will be affected. Testing earlier and more often is a great way to involve users, a requirement for user-centred design, or UCD. It also changes the role of testing from summative to formative, as it shapes the design of the product before release, rather than after.

Informing what you design and build

I was recently invited to join a design-specification review for a feature I’ll call Feature X.

As I listened to the presentation, I thought: “There are pieces missing from this spec.” When the time came for questions, I asked about the project’s scope. “Your spec is titled Feature X, but I see very little X described in this document. What does X mean to you?” Sure enough, there was a gap between the title of the design specification and the content of the design specification. And the gap was deliberate, on the part of the Development team.

What we're building

The company in question makes software, not cars or bicycles, but the gap between the spec’s title page and the spec’s content was just as great as the one in the illustration. The company’s potential customers say they want Feature X. The Development team says they only have the resources to build Feature Non-X. Non-X is missing some of the key features that define the X experience.

Except for its sleight-of-hand usefulness to sales staff, Feature Non-X may be a non-starter. But there’s one more thing to tell you:

  • Customers say they want Feature X, but the vast majority of users who already have Feature X don’t use it.

Apparently—I say “apparently” because the evidence is anecdotal—one reason customers who have a competitor’s X don’t use it is that X is complicated to set up and complicated to use. This is, of course, a golden opportunity to make a simple, usable feature that provides only what customers will use.

If this small company is lucky, their Feature Non-X will sell well and the company will leap-frog their Feature-X competitor. With a little marketing or ethnographic research, the company would have some certainty about why Feature X is requested but not used—and the team would have information to help them design Non-X. Unfortunately, a lack of resources may leave the team’s designer and developers guessing, and the company will have to take this uncertainty in stride.

How many user personas?

If you’re creating user personas, How-To articles often tell you that you only need two or three personas at most. That’s fine for most web-design projects. However, if you are working on an enterprise-wide system that has modules for different types of professionals who each perform distinct and substantial tasks, then you will have a larger number of user profiles.

How big is the feature set? Imagine a product suite the size of Microsoft® Office that actually consists of very different pieces: Excel, Word, Access, PowerPoint, and more. Usually, the only persona who is involved with every module of a suite is someone like Ivan the IT administrator, whose tasks are very different from those of most users.

That may sound obvious, but I really struggled for a while with the notion that I was “doing it wrong” because I couldn’t squeeze the user roles and needs into a mere three user personas—or five, or seven, for that matter. When you have a dozen user personas, it’s challenging to keep them all apart, but most teams only need a few at a time. Likely, the only people who need to know all the user personas are on the user-experience and product-management teams. And the alternative—to have a catch-all user persona whose role is to “use the software”—is of no help at all.

If, by chance, you find you need many user personas, then beware once the projects wrap up and the teams turn to their next projects. They may have incorrectly learned to begin a project by creating user personas (“…well, that’s what we did last time…!”) instead of re-using and tweaking the existing user personas. [Not everyone agrees with re-use; see the comments.]

User personas will influence your product design and affect how people throughout Development and Marketing think, strategically and tactically, about their work. So you need to get the user personas right. Getting a series of user personas reviewed—not rubber-stamped but mindfully critiqued—is a challenge; nobody ever makes time to do it well.

Here’s my solution: after you research users and then write your draft user personas, review them together with some subject-matter experts, Marketing staff, and developers, in a barn-raising exercise. Pack them all into one afternoon for the initial review. Then meet the first Friday of each month until you have agreement about each user persona. During one such meeting, one of my subject-matter experts said to another: “Oh, this user persona is just like [a customer named H—]!” The user persona was so on-target that it reminded her of someone I had never met or researched. That was a nice way to learn that I got it right.

For detailed How-To advice on developing user personas, try these readings:

This sugar packet is a movie

Whether it’s ethnographic research, usability research, or marketing research, I’ve learned that the best insights aren’t always gleaned from scheduled research.

Here’s a photo of impromptu research, conducted by Betsy Weber, TechSmith’s product evangelist. I was her research subject. Betsy recorded me pushing sugar packets around a table as I explained how I’d like Camtasia to behave.

Jerome demos an idea to Betsy. Photo by Mastermaq

Betsy takes information like this from the field back to the Camtasia team. There’s no guarantee that my idea will influence future product development, but what this photo shows is that TechSmith listens to its users and customers.

The ongoing stream of research and information that Betsy provides ensures better design of products that will be relevant and satisfying for TechSmith customers down the line.

Cognitive psych in poll design

The WordPress community recently ran a poll. Users were asked to choose one of 11 visual designs. The leading design got only 18% of the vote, which gives rise to such questions as:

  • Is this a meaningful win? The leader only barely beat the next three designs, and 82% voted for other designs.

WordPress poll

I don’t know about the 18% versus 82%. I do wonder whether some of the entries triggered a cognitive process in voters that caused them to pay less attention to the other designs, which may bring the leading design’s razor-thin lead into question. This cognitive process—known as the “ugly option”—is used successfully by designers as they deliberately apply cognitive psychology to entice users to act. I’ll explain why, below, but I first want to explain my motivation for this blog post.

I’m using this WordPress poll as a jumping-off point to discuss the difficulty of survey design. I’m not commenting on the merit of the designs. (I never saw the designs up close.) And I’m certainly not claiming that people involved in the poll used cognitive psych to affect the poll’s outcome. Instead, in this blog, I’m discussing what I know about cognitive psychology as it applies to the design of surveys such as this recent WordPress.org poll.

Survey design affects user responses

If you’ve heard of the controversial Florida butterfly ballot in the USA’s presidential election in 2000, then you know ballot design—survey design—can affect the outcome. I live outside the USA, but as a certified usability analyst I regularly come across this topic in industry publications; since that infamous election, usability analysts in the USA have been promoting more research and usability testing to ensure good ballot design. I imagine that the Florida butterfly ballot would have tested poorly in a formative usability study.

The recent WordPress poll, however, would likely have tested well in a usability study to determine whether WordPress users could successfully vote for their choice. The question I have is whether the entries themselves caused a cognitive bias in favour of some entries at the expense of others.

It seems that one design was entered multiple times, as dark, medium, and light variations. That sounds like a good idea: “Let’s ask voters which one is better.” Interestingly, the visual repetition—the similar images—may have an unintended effect when other designs are added to the mix. Cognitive science tells us people are more likely to select one of the similar ones. Consider this illustration:

More people choose the leftmost image. The brain’s tendency to look for patterns keeps it more interested in the two similar images. The brain’s tendency to avoid the “ugly option” means it’ll prefer the more beautiful one of the two. Research shows that symmetry correlates with beauty across cultures, so I manipulated the centre image in Photoshop to make it asymmetrical, or “uglier”.

The ugly-option rule applies to a choice between different bundles of goods (like magazine subscriptions with different perks), different prices (like the bottles on a restaurant wine list), and different appearances (like the photos, above). It may have applied to the design images in the WordPress poll. The poll results published by WordPress.org list the intentional variations in the table of results:

  • DR1: Fluency style, dark
  • DR2: Fluency style, medium
  • DR3: Fluency style, light

The variants scored 1st, 4th, and 6th

In addition to these three, which placed 1st, 4th, and 6th overall, it’s possible there were other sets of variations, because other entries may have resembled each other, too.

As a usability analyst and user researcher, I find this fascinating. Does the ugly-option rule hold true when there are 11 options? Was the dark-medium-light variation sufficient to qualify one of the three as ugly? Did the leading design win because it was part of a set that included an ugly option? And, among the 11 entries, how many sets were there?

There are ways to test this.

Test whether the poll results differ in the absence of an ugly-option set. A/B testing is useful for this. It involves giving half the users poll A with only one of the dark-medium-light variants, and the other half poll B with all three variants included. You can then compare the two result sets. If there is a significant difference, further combinations can be tested to rule out other possible explanations.
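As a minimal sketch of how the two result sets could be compared, assuming you can collect raw vote counts from both polls, a chi-square test of independence would do. The counts below are invented; only the shape of the analysis matters.

    # Invented vote counts; requires scipy (pip install scipy).
    from scipy.stats import chi2_contingency

    #          Fluency  Design2  Design3  Design4
    poll_a = [   180,     170,     160,     150]   # poll A: only one Fluency variant shown
    poll_b = [   230,     150,     140,     140]   # poll B: all three variants shown, Fluency votes pooled

    chi2, p_value, dof, expected = chi2_contingency([poll_a, poll_b])
    print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
    if p_value < 0.05:
        print("The two polls differ; the variant set may be shifting votes.")
    else:
        print("No significant difference between the two polls was detected.")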

For more about the ugly option and other ways to make your designs persuasive, I recommend watching Kath Straub and Spencer Gerrol in the HFI webcast, The Science of Persuasive Design: Convincing is converting, with video and slides. There’s also an audio-only podcast and an accompanying white paper.

Low-fi sketching increases user input

Here are three techniques for eliciting more feedback on your designs:

  • show users some alternatives (more than one design).
  • show users a low-fidelity rather than high-fidelity rendering.
  • ask users to sketch their feedback.

To iterate and improve the design, you need honest feedback.  Let’s look at how and why each of these techniques might work.

Showing alternative designs signals that the design process isn’t finished. If you engage in generative design, you’ll have several designs to show to users. Users are apparently reluctant to critique a completed design, so a clear signal that the process is not yet finished encourages users to voice their views, but only somewhat.

Using a low-fidelity rendering elicits more feedback than the same design in a high-fidelity rendering. Again, users are apparently reluctant to critique something that looks finished—as a high-fidelity rendering does.

High-fidelity vs. low-fidelity rendering of the same design

The design is the same, but it feels more difficult to criticise the one on the right.

Asking users to sketch their feedback turns out to be the single most important factor in eliciting feedback. It’s not known why, because there hasn’t been sufficient published research, but I hypothesize that it’s because this is the most indirect form of criticism.

Where’s the evidence for sketched feedback?

The evidence is unpublished and anecdotal. The problem with unpublished data is that you must be in the right place at the right time to get it, as I was during the UPA 2007 annual conference when Bill Buxton asked the room for a show of hands. Out of about 1000 attendees, several dozen said they had received more and better design-related feedback by asking users to sketch than by eliciting verbal feedback.

When you ask a user: “Tell me how to make this better,” they shrug. When you hand them a pen and paper and ask: “Sketch for me how to make this better,” users start sketching. They suddenly have lots of ideas.

My own experience agrees with this. In Perth, Australia, I took sketches from a Five Sketches™ design session to a customer site for feedback. I also brought blank paper and pens, and asked for sketches of better ideas.

Not surprisingly, the best approach is to combine all three techniques: show users several low-fidelity designs, and then ask them to sketch ways to make the designs better.