Predict your “usable release” date by integrating user research

Questions that stakeholders, project managers, and product owners have in common:

  • When will the product be finished?
  • When will a usable product be released?

Both questions can be answered with the same method: a burn-down chart. But the second question requires adding certain user-research findings to the chart.
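
Here’s a minimal sketch of that method, assuming a team that tracks the story points remaining at the end of each sprint and keeps a separate tally of the fixes that user research has flagged as must-do before the product is usable. All dates and numbers below are hypothetical:

    from datetime import date, timedelta

    # Hypothetical sprint history: story points remaining at the end of each sprint.
    remaining = [120, 104, 90, 74, 60]
    sprint_length_days = 14
    last_sprint_end = date(2024, 3, 1)

    # Extra points for the usability fixes that user research says must ship
    # before the product counts as "usable", not merely "finished".
    usability_debt_points = 18

    # Average velocity (points burned per sprint) over the recorded history.
    velocity = (remaining[0] - remaining[-1]) / (len(remaining) - 1)

    def projected_date(points_left):
        sprints_needed = points_left / velocity
        return last_sprint_end + timedelta(days=sprints_needed * sprint_length_days)

    print("Finished:", projected_date(remaining[-1]))
    print("Usable release:", projected_date(remaining[-1] + usability_debt_points))

The gap between the two projected dates is the calendar-time cost of the user-research findings you added to the chart.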

Ten ways to improve the usability of products that Agile teams build

Software development that uses a waterfall method is likely to deliver the wrong thing, too late. The intent of the Agile method is to deliver working software sooner, so the intended users—our clients and their customers—can provide feedback that steers us to deliver the right thing.

There’s a tension between delivering on time and delivering the right thing. In fact, the rush for on-time delivery can result in the wrong thing—an unusable product. There are ways to prevent this. User research can help.

Poor usability is a form of tech debt

If you’ve worked in software development for a while, you may have noticed that work on usability gets postponed more often than work on new features and functions. You could see this as a form of tech debt. It accumulates with every release.

[Image: Bike-tire pump]

What contributes to this accumulation?

  • Timing. Some usability issues aren’t identified until Alpha testing with customers begins, or until after the product is released.
  • Competition. There’s often pressure to leap ahead or catch up with competitors by adding new features and functions.
  • Budgeting. If multiple teams compete for a share of the development budget, something shiny and new may attract more funding than boring old maintenance, upgrades, and tech debt.

It’s not an either/or proposition. With every release you can give your product a bit more usability. And you can do this at a low-to-moderate cost and low-to-moderate risk.

Designing and influencing user performance

When designing the user experience of software, UX and development teams often focus on how the user interface supports user performance, because that’s within their locus of control. Once the product is in the wild, environmental factors may reduce user performance despite the team’s best product-design efforts. But I believe a UX team can also influence the environment in which its products get used. Consider two of those environmental factors:

  • The user’s display size.
  • The soundscape.

Large displays < one salary

[Image: The environment affects user performance]

Users of all ages and genders are more effective at search tasks and comparison tasks (Tao Ni et al., 2006), and at spatial tasks, when they use large displays. Mary Czerwinski et al. reported a significant performance benefit of 12% (2003). However, when given a choice, people don’t want very large displays on their office desks; they opt for medium-sized displays instead. One study showed that older users least prefer large displays yet stand to gain the most performance benefit from them. (This study was done before multi-monitor arrangements became common.)

A 12% improvement in performance suggests that 7 people with large displays could theoretically do the job of 8 people with medium displays. How many large displays could your office buy for one person’s salary every year? For business-to-business sales and especially for enterprise-wide software implementations, there’s a place for sales teams and proposal writers to mention the business case for larger displays.
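
To make that arithmetic concrete, here’s a back-of-the-envelope sketch. Only the 12% benefit comes from the research cited above; the team size, salary, and display price are hypothetical placeholders for your own figures:

    # Back-of-the-envelope business case for larger displays.
    performance_gain = 0.12  # reported by Czerwinski et al. (2003)

    team_size = 8
    equivalent_staff = team_size / (1 + performance_gain)  # about 7.1 people
    print(f"{team_size} people on medium displays ~= {equivalent_staff:.1f} on large ones")

    annual_salary = 70_000       # hypothetical fully loaded salary
    large_display_cost = 900     # hypothetical price per large display
    print(f"One salary buys {annual_salary // large_display_cost} large displays a year")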

Call it what you want—innovation, thinking outside the box, providing solutions—your UX-Design team can work with the Sales and Service/Implementation teams to ensure customers get solutions that include better hardware choices.

Speak less clearly, please

A half-decade of research by Dr Sabine Schlittmeier has expanded on what common sense told us: it’s harder to concentrate when others are chatting in the background. Schlittmeier found that the louder and more intelligible background speech is, the more it degrades verbal short-term memory, sustained attention, and verbal-logical reasoning. When I asked her which techniques have been shown to work, Schlittmeier told me that a masking sound, such as music or talk radio, is not objectively effective, because the higher overall level of background sound has its own detrimental cognitive effects, although people subjectively feel that it works. She added that there’s a measurable benefit to:

  • Shifting high-concentration work to times when fewer people are around.
  • Doing high-concentration work in single offices.

I suppose working remotely—from a quiet home—is a variation of these solutions.

I also asked, “What one thing, if handled differently, would most improve the way people experience noise at work?” Schlittmeier said it’s not about one thing. She recommended attacking problem sound from all dimensions at once: loudness, frequency characteristics, sound production, transmission, and so on.

The way I read the research results, reducing background speech to a soft, unintelligible noise could result in a 10% to 25% decrease in memory errors and logic errors, and an 18% increase in attention span. What Schlittmeier hasn’t provided is data about overall productivity improvement, without which it’s harder to make a business case for spending on office-noise abatement.

But there are other ways to mitigate the background office noise that affects your users, and you may be able to influence how your customers approach that problem.

[Image: A box that promotes wide screens or headsets]

Again: call it what you want—innovation, thinking outside the box, providing solutions—your UX-Design team can work with the Marketing team to influence the environment through traditional marketing. Imagine a business-to-consumer product that is designed to work even better with a (noise-cancelling) headset—and which is depicted in use with headsets in the marketing messages and on the packaging.

Train yourself in frustration, confusion, and inefficiency

For professional reasons, I like to mess around with software. It’s a form of training, because some of the messing around leads to frustration, confusion, and inefficiency. And that’s good.

My hope is that my experiences will help me to better understand what I put various groups of software users through when they use the software I helped design and build.

An easy way to mess around is by changing default settings. For example, my iTunes isn’t set to English. This helps me understand the experience of users who learned one language at home as children and now use another language at work as adults. The benefit isn’t just in experiencing the initial pain of memorising where to click (as I become a rote user in a GUI I cannot read), but also in the additional moments of frustration when I must do something new—an occasional task whose command vector I haven’t memorised.

[Image: Relating to the language challenges that some users face]

Another easy way to mess around is to switch between iMac and Windows computers. It’s not just the little differences, such as whether the Minimise/Maximise/Close buttons are on the left or right sides of the title bar, or whether that big key on the keyboard is labelled Enter or Return.

[Image: Switching between operating systems]

It’s also the experience of inefficiency. It’s knowing you could work faster, if only the tool weren’t in your way. This also applies to successive versions of “the same” operating system. This is the frustration of the transfer user.

It’s noticing how completely arbitrary many design standards are—how arbitrarily different between operating systems—such as the End key that either does or doesn’t move the insertion point to the end of the line.

Another easy way to mess around is to run applications in a browser that’s not supported. I do it for tasks that matter, such as making my travel bookings.

All this occasional messing around is about training myself. The experiences I get from this broaden the range of details I ask developers to think about as they convert designs into code and into pleasing, productive user experiences.

In a separate IxDA discussion thread, a few people reacted to this blog post:

  • Try a Dvorak keyboard instead of a Qwerty keyboard (Johnathan Berger).
  • Watch children’s first use of a design (Brandon E.B. Ward).
  • Use only the keyboard, not the mouse (CK Vijay Bhaskar).
  • Sit in at the Customer Support desk for a day (Adrian Howard).
  • Search Twitter to find out how people feel about a product (Paul Bryan).

See also the comment(s) below, directly in this blog.

Testing in the UX-design process

Three weeks ago, a client called me. They had just completed release 1.0 of a new Web application that will replace their current flagship product. The client was asking about summative usability testing to evaluate how well the product performs in the hands of users, because they want their customers to succeed.

Since the product is an enterprise-wide product that requires training, one thing the client specifically asked about was whether the Help is a help to users.

[Image: Do you need help?]

A quick heuristic review I did turned up no obvious problems in the Help, so we decided on user observation with scenarios. In a preparatory dry run a few weeks ago, I supplied a participant with a few scenarios and some sample data. The participant I observed was unable to start two of the scenarios, and completed the third scenario incorrectly by adding data to the wrong database.

The Help didn’t help her. The participant was able to find the right Help topic, but she completely misinterpreted the first step in the Help’s instructions.

The team had not anticipated the apparent problem that turned up during the dry run. Assuming it is a real problem—and this can’t be more than an assumption given the sample size of 1—this story nicely illustrates the benefit of summative testing, as you’ll see below.

Best practices working together

The team, including a product manager, several developers, a technical communicator as Help author, and me as a contract usability analyst, used these best practices:

  • The Help author used a single-sourcing method. The most common GUI names, phrases, and sentences are re-used: written once and inserted into many topics, like a variable (see the sketch after this list). In almost every Help topic, the problematic first step was one such re-usable snippet.
  • The product manager assesses bugs based on severity and cost, ensuring the low-hanging fruit and the most serious defects get priority.
  • In a heuristic review of the Help, I (wearing a usability-analyst hat) did not predict that the first step in most topics would be misinterpreted. Heuristic reviews, when conducted by a lone analyst, typically won’t predict all usability problems.
  • The developers use an Agile method. At this stage of their development cycle, they build a new version of the product every Friday, and, after testing, publish it the following Friday.
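
To illustrate the single-sourcing idea, here’s a minimal sketch. It isn’t the team’s actual Help tooling, and the snippet and topic wording are invented:

    # Minimal single-sourcing sketch: a shared snippet is written once and
    # inserted into many Help topics, so rewording it in one place
    # propagates to every topic that uses it.
    snippets = {
        "open_db": "On the File menu, click Open Database, then select your project database.",
    }

    topics = {
        "Add a record": "{open_db} Then click New Record and fill in the fields.",
        "Delete a record": "{open_db} Then select a record and press Delete.",
    }

    def render(topic_name):
        return topics[topic_name].format(**snippets)

    # Reword the shared first step once; the fix appears in every topic.
    snippets["open_db"] = ("On the File menu, click Open Database, then select "
                           "the database for the project you are working on.")
    for name in topics:
        print(f"{name}: {render(name)}")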

After the dry run uncovered the apparent problem, the product manager said: “Let’s fix it.” Since the Help author used re-usable snippets, rewording in one place quickly fixed the problem throughout the Help. And the company’s Agile software development method meant the correction has already been published.

Was this the right thing to do? Should an error found by one participant during a dry run of upcoming usability tests result in a change? The team’s best practices certainly made this change so inexpensive as to be irresistible. With the first corporate customer already migrated to the new product, my client has a lot riding on this. I can’t be certain this rewritten sentence has improved the Help, but—along with the other bugs they’ve fixed—I know it increases my client’s confidence and pride in their product’s quality.

It’ll be interesting to see what the upcoming user observations turn up.

Reminding myself of things I already know

The actual user-observation sessions are still ten days away, but the dry run reminded me of things I already know:

  • Despite each professional’s best efforts, there will always be unanticipated outcomes where users are involved. Users have a demonstrated ability to be more “creative” than designers, developers, and content authors, simply by misinterpreting our work and putting it to unintended uses.
  • The best practices in each discipline can dovetail and work together to allow rapid iteration of the product by the team as a whole. A faster response means fewer users will be affected and the cost of support—and of the rapid iteration—will be lower. A good development process adjusts practices across teams (product management, research, development, user experience, design, tech-comm, quality assurance) so the practices dovetail rather than conflict.
  • Summative testing helps validate and identify what needs to be iterated. Testing earlier and more often means that fewer or perhaps no users will be affected. Testing earlier and more often is a great way to involve users, a requirement for user-centred design, or UCD. It also changes the role of testing from summative to formative, as it shapes the design of the product before release, rather than after.

Design requires courage and trust, not just user involvement

Designing is usually a rewarding activity, but the path from start to finish can be filled with frustration and even panic. I’ve seen design processes work—and come to the realisation that “My own designs benefited from rapid iteration!”

[Image: The benefit of design]

These humbling experiences helped me learn to trust the process, even in the face of frustration or panic. It’s these experiences that give me the courage to follow the design process, even when it isn’t clear how to resolve the tension between conflicting design constraints.

In the face of an unknown, individuals and especially teams tend to turn to knowns. If needed, they’ll manufacture the known data by deferring the choice to users. Here’s part of what Larry Constantine wrote about courage in software design, in a paper that advocates for user involvement at the right time:

Most damning and least recognized among the limitations of user-centered design is the way it subtly discourages courage. Courage is one of the central tenets of extreme programming and agile development methods. […] User-centered design makes it too easy for designers to abdicate responsibility in deference to user preference, user opinion, and user bias. In truth, it is hard to stick with something you know works when users are screwing up their faces at it. What if you are wrong? What if you are not as good a designer as you thought you were? It takes real courage and conviction to stand up for an innovative design in the face of users who complain that it is not what they expected or who want it to work just like some other software or who object to certain sorts of features as a matter of course. It takes responsible judgment to know when to listen to users and when to ignore them.

In the many design sessions I have facilitated, three times I’ve seen that lack of courage expressed by a participant. Each time, it sounded like a mix of panic and frustration:

The solution has been on the wall since the first round!

The design sessions I facilitate ask participants to saturate the design space with lots of ideas. They each bring five sketches—five substantially different ideas—and then, after sharing their ideas with the other participants, they rapidly iterate the first 15 or 20 sketches to develop even more. All this takes place before any analysis.

When the goal is to saturate the design space—to identify as many solutions as possible in a short time—there’s more material to influence the design once the analysis begins. Inevitably, the design the team decides on is not one that was already on the wall. Motivated design participants quickly learn this, and, in most cases, become advocates of the process.

For most development teams, the Five Sketches™ process I introduce is a departure from the status quo, so it takes courage for their team members to take a stand, to say “I will use this process” for design problems that need it.

Starting over on the right problem

If you’re designing a bedpan washer, do you design one that nurses don’t have to wait for?

According to a newspaper report, BC’s Centre for Disease Control (BC CDC) found that a British Columbia hospital had

bedpan-cleaning machines that take 13 minutes for each cycle.

If they wanted to ensure each bedpan got returned to the right patient, nurses had to stand by for the duration. […]

The BC CDC found the [bedpan washing machines] to be inconvenient and too time consuming.

As an additional disincentive for nurses to wait out the 13 minutes, the newspaper says: “If you don’t load the machine exactly right, they not only don’t work, they sometimes spray aerosolized feces on you when the door is opened.”

Oh dear.
[Image: Cleaning bedpans by machine]

It’s easy to ask pointed questions after the fact, but here goes. Since nurses are too rushed to wait 13 minutes, would an ethnographic study of hospitals have identified time pressure as a factor? Did researchers ask how long a nurse could or would wait for a bedpan washer? If the answer is “they won’t wait at all; they’ll go do something else,” then that reframes the design problem:

Can the machine track which bedpan gets returned to whom without relying on a nurse’s memory?

Can the machine clean bedpans so that it doesn’t matter to which patient they are returned?

These are very different design problems to solve. Other possible design questions to have asked:

Can the machine’s design prevent improper loading?

Can the machine not spray fecal matter at the person who opens it? Or, if this problem wasn’t predictable at the outset: is the machine pleasant to use?

Can the machine be operated correctly by untrained users?

…and so on.

I’ve worked on projects where we thought we had the problem space clearly defined, and then—after exploring the design space and attempting to converge on a solution—realised that we had to redefine the problem and start over. I’d say that happens in about 20% of the projects I work on. I’ve also worked on a health product where we couldn’t change the hardware component, so we had to design a software and website solution to mitigate the hardware’s intermittent connectivity problem.

I don’t know anything about the design of the bedpan washer, above, but I understand that BC CDC implicated bedpans in the hospital’s outbreak of Clostridium difficile, and that the hospital switched to another cleaning method. The costs to the manufacturer are potentially horrendous. If the design team did everything right—including an iterative design process and early, user-involved testing—and still missed the mark, then they have my sympathy.

But now that they have a better understanding of the problem space—now that they know the “right” problem—they can design and build a better product.

Fat finger fone oops backspace

How tiny does the keyboard on a handset or smartphone need to be?

[Image: Data-entry trouble for fat fingers]

If you ask me, I’d say: “Not anywhere near as tiny as they are.”

I’d also say: “If you make an app for iTouch or iPhone, ensure that the keyboard flips into a larger, wider version when users rotate the device on its side.”

Photos help user personas succeed

If your user persona includes an image, which type of image helps the team produce designs that are more usable?

[Image: a user persona pictured as an illustration (left) and as a photo (right)]

The illustration on the left? Or the photo on the right?

According to Frank Long’s research paper, Real or Imaginary: The effectiveness of using personas in product design, photos are better than illustrations. Teams whose user personas include a photograph of the persona produce designs that rate higher when assessed against Nielsen’s heuristics for UI design.

Frank Long compared the design output of three groups of his students at the National College of Art and Design (NCAD) in Ireland on a specific five-week design project. Two groups used user personas of different formats; the third group was a control group that worked without user personas. The experiment looked for differences in the heuristic assessments of the groups’ designs.

Photos—versus illustrations—are one of the ways I’ve engaged project teams with the user personas that I researched and wrote for them. Here’s a teaser:

A card game to help an Agile software team learn about their product’s user personas.