Put usability in your Agile backlog

We’ve all seen it: waterfall projects that deliver the wrong thing, too late. So we understand the appeal of the Agile method: delivering working software sooner, so that the intended users—our clients and their customers—can provide feedback that steers us toward delivering the right thing. Agile reporting tools also help us estimate how long the work will take, which makes it possible to deliver on time.

But there’s a tension between delivering the right thing and delivering on time. And as UX practitioners, we sometimes see usability sacrificed in the rush to release on time. This happens despite the first of the Agile Manifesto’s principles:

  • Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

Valuable software is usable software, among other things. Data about what’s usable comes from testing. And much of that testing can’t take place until after the development—done in Agile stories—is completed and signed off. Development teams—consisting of analysts, developers, testers, usability researchers, and interface designers—often consider an Agile story to be completed despite the lack of feedback from the intended customers about its usability—or unusability. We can do better.

Would you have designed it that way?

In my day-to-day life, I often think about design problems as I encounter them. I find myself wondering about information that I don’t have—details that would help me solve the problem I noticed. And I wonder: faced with the same constraints, would I have come up with the same solution? Here’s one I encountered.

Passengers waiting to board a ferry

Last week, some friends wanted to visit their family on an island. Where I live, people use ferries to travel between various islands and the mainland. At times, I’ve made the crossing on foot, by bus, or by passenger car. The choice might depend on the size of our group, how far we’re going on the other side, how much we want to spend, and what time of day and year we’re travelling. On busy days the ferries fill to capacity, and traffic reports may announce “a 1- or 2-sailing wait” between points. From time to time the media discusses changes to ferry service, prices, and ridership. All in all, many factors influence the deceptively simple question: “When I get to the ferry, will there be space for me on board?” The question could also be: “Can I avoid waiting in line?”

The ferry company’s website answers this question in a seemingly fragmented way, and that got me thinking: why was the answer fragmented, and what user needs was the website’s current design meeting? The ferry company segments its audience by mode of travel. This segmentation is logical for an audience motivated by cost, because a ferry passenger on foot pays less than a ferry passenger in a car. But when other decision-making factors are more important than price—such as space availability—segmenting users by mode of travel might not be helpful.

Can I avoid waiting?

The friends I mentioned earlier had all the time in the world to get to their family on the island. But they didn’t want to wait in line for hours. Finding the answer to “is there space for us, or will we have to wait” is complicated because the answers seem to be organized by mode of travel on different pages of the website. Here’s a reproduction of one of the first “is there space for me” answers I found on the website:

Is there space on the ferry?

Given the question, the above screen may not be clear. What is deck space? And—look closely at the orange bar—how much deck space is available? Is it zero or 100%? Is a reservation the same thing as a ticket? Does everyone require a reservation to board?

Here’s another way to present the same information, this time making it clearer that a driver’s willingness to pay more may influence wait time:

No reserved spaces on the ferry

Now it’s clear that this information about availability only applies to vehicles that want a reservation. That means foot passengers, bus passengers, and cyclists still don’t have an answer to the “will we have to wait” question. From experience, frequent travellers already know part of the answer: passengers on foot almost never have to wait, but occasional travellers and tourists wouldn’t know this. And travellers with vehicles may wonder about alternatives, because leaving the car on shore and boarding on foot could put them on an earlier ferry. The answer to “can we avoid waiting” may require a comparison of wait times for each mode of travel.

Here’s another way to present the information, this time listing more modes of travel:

Different types of space on the ferry

The above screen answers the “can we avoid waiting” question more clearly. In addition to providing greater certainty for some modes of travel, it also meets the (presumed) business need of generating revenue by selling reservations.
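
As a thought experiment only (this is my own sketch, not the ferry company’s actual system), the per-mode answer to “can we avoid waiting” could be driven by a small data structure that carries availability for each mode of travel:

    # Hypothetical sketch (Python): one availability record per mode of travel,
    # so the "can we avoid waiting" answer can be compared across modes on one screen.
    from dataclasses import dataclass

    @dataclass
    class ModeAvailability:
        mode: str              # e.g. "Foot passenger", "Vehicle with reservation"
        spaces_left: int       # 0 means the next sailing is full for this mode
        sailings_to_wait: int  # 0 means there is space on the next sailing

    def waiting_summary(availability):
        lines = []
        for a in availability:
            if a.sailings_to_wait == 0:
                lines.append(f"{a.mode}: space available on the next sailing")
            else:
                lines.append(f"{a.mode}: about a {a.sailings_to_wait}-sailing wait")
        return "\n".join(lines)

    print(waiting_summary([
        ModeAvailability("Foot passenger", 250, 0),
        ModeAvailability("Vehicle with reservation", 4, 0),
        ModeAvailability("Vehicle without reservation", 0, 2),
    ]))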

Design questions, but no answers

It’s easy to theoretically “solve” a design problem that we encounter, but there are always unknowns.

  • Is there really a design problem? How would we know?
  • Would this design have been technically possible?
  • Would this design have been affordable?
  • Would this design have met the needs of many users, or only a few?
  • Would this design have been poorly received by customers or interested groups?
  • And so on…

So if you can’t know all the answers, why bother with the exercise? Because it’s what we do, in our line of work.

The trigger for this exercise

Here’s an excerpt of the screen that inspired this post.

Excerpt of the original screen

Reduce spam without hindering usability

If your website lets visitors sign up, join in, add comments, or enter reviews, then—in addition to the legitimate details you want—you’re getting some garbage. Some of this garbage is sent by automated spam-bots.

You can reduce the unwanted entries that your website receives, but consider who pays the price to keep your business’s data clean. Does your business pay? …Or do your legitimate site visitors pay?
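
One way to shift that cost away from legitimate visitors (a sketch of my own, not necessarily what the full post recommends) is a “honeypot” field: humans never see it, so a submission that fills it in is very likely automated.

    # Hypothetical sketch (Python): a honeypot check in a form handler.
    # The form includes an extra field hidden from humans with CSS; spam-bots
    # tend to fill in every field, so a non-empty value flags the submission.

    HONEYPOT_FIELD = "website_url"   # hypothetical hidden field name

    def looks_like_spam(form_data: dict) -> bool:
        # Legitimate visitors never see the field, so it should arrive empty.
        return bool(form_data.get(HONEYPOT_FIELD, "").strip())

    submission = {"name": "Pat", "comment": "Great post!", "website_url": ""}
    if looks_like_spam(submission):
        pass   # silently drop the entry, or flag it for review
    else:
        pass   # save the legitimate entry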

When a user interface is for using—not for understanding—a product

The purpose of a user interface is not to explain how a product works. Instead, the interface exists to help people use the product. Here’s an idea: if someone can use your product without understanding how it works, that’s probably just fine.

What model does the user interface reflect?

Models are useful to help people make sense of ideas and things.

An implementation model is how engineers and software developers think of the thing they’re building. It helps them to understand the product’s inner workings, the sum of its software algorithms and physical components. For example, a car mechanic has an implementation model of combustion engines.

A mental model is how someone believes a product behaves when they interact with it. It helps them to understand how to use the product. For example, a typical car driver has a mental model of pressing the gas pedal to go faster and pressing the brake to slow down. This mental model doesn’t reflect how the car is built—there are many parts between the gas pedal and the spinning tires that typical drivers don’t know about.

The implementation model and the mental model can be very similar. For example, the mental model of using a wood saw is that “The saw makes a cut when I drag it back and forth across the wood.” This overlaps with the implementation model. In addition to the back-and-forth user action, the implementation model also includes an understanding of how the saw’s two rows of cutting edges—one for the forward stroke and one for the backward stroke—cut the wood fibers, break the cut fibers loose, and then clear them from the kerf, and of whether the saw’s tooth shape is better suited to fresh wood or dried wood.

The mental and implementation models can overlap, or not

The implementation model and the mental model can also be very different. Let’s consider another example: getting off a public-transit bus. The mental model of opening the exit doors is that “When the bus stops, I give the doors a nudge and then the doors open fully.” The implementation model of the exit doors is that, once the bus stops and the driver enables the mechanism, the exit doors open when a passenger triggers a sensor. Now consider this: if the sensor were a touch sensor, the passenger’s mental model of “nudging the door” would be correct. But in fact the sensor is a photoelectric sensor—a beam of light—so the passenger’s mental model of “nudging the door” is incorrect.

To exit, break the photoelectric beam

Getting bus passengers to break the photoelectric beam was a real-life design challenge that was solved in different ways. In Calgary, public-transit buses use a large, complex sign on exit doors to present a mental model that’s somewhat consistent with the implementation model:

Signage explains the complex implementation model

TO OPEN THE DOOR
      1. WAIT FOR GREEN LIGHT
      2. WAVE HAND NEAR DOOR HERE

In Vancouver, public-transit buses use a large, simple sign on exit doors to present a mental model that’s inconsistent with the implementation model:

Signage for a simpler mental model

TOUCH HERE ▓ TO OPEN

In fact, touch does not open the exit doors at all—not on the Vancouver buses or the Calgary buses I observed. Only when a passenger breaks the photoelectric beam will the doors open. In Calgary passengers are told to wave a hand near the door. A Calgary bus passenger might conclude that the exit door has a motion sensor (partly true) or a proximity sensor (not true).  In Vancouver passengers are told to touch a target, and the touch target is positioned so the passenger will break the photoelectric sensor beam when reaching for the target. A Vancouver bus passenger might conclude that the exit door has a touch sensor (not true).

Calgary bus passengers are more likely to guess correctly how the exit door actually works because the sign presents a mental model that partly overlaps the implementation model: the door detects hand-waving. But does that make it easier for someone without prior experience to exit the bus?

No, it’s harder.

It’s more difficult for a sign to get passengers to hold a hand up in the air in front of the door than to get them to put a hand on the door. Here’s why: if you knew nothing about a door that you wanted to open outward, would you place a hand on the door and push? Or would you wave at it? From our lifelong experience with doors, we know to push them open. Touching a door is more intuitive than waving at it, and that’s why “nudge the door” is a better mental model and thus an easier behaviour to elicit and train. The simpler mental model improves usability.

Rule of thumb for mental models

When understanding of a product’s inner workings is unnecessary, staying true to the implementation model risks increasing the complexity of the user interface. Instead, have the user interface reflect a mental model that is simple, effective, and usable.

If you can relate the use of an object to a common experience or simple idea then do so—even if it doesn’t follow the implementation model. It is unnecessary for a system’s user interface to convey how the product was built. The user interface only needs to help users to succeed at their tasks.
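
To make this concrete, here is a loose sketch of my own (not from the post): the public interface mirrors the rider’s simple mental model of the bus exit door discussed above, while the sensor details stay hidden in the implementation.

    # Hypothetical sketch (Python): the public method reflects the mental model
    # ("give the door a nudge"); the photoelectric sensor is an internal detail.

    class ExitDoor:
        def __init__(self):
            self._enabled = False   # the driver enables the mechanism at a stop
            self._open = False

        def enable(self):
            self._enabled = True

        def is_open(self):
            return self._open

        # Mental-model interface: what the rider believes they are doing.
        def nudge(self):
            # In reality, a hand near the door breaks a light beam; the rider
            # never needs to know this to succeed at the task.
            self._on_beam_interrupted()

        # Implementation model: what the hardware actually detects.
        def _on_beam_interrupted(self):
            if self._enabled:
                self._open = True

    door = ExitDoor()
    door.enable()
    door.nudge()
    print(door.is_open())   # True: the task succeeds without understanding the sensor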

No doubt there are cases where a lack of understanding of a product’s inner workings could cause danger to life and limb, or unintended destruction of property. In such cases, the mental model needs to convey the danger or risk or, failing that, needs to overlap more with the implementation model.

If the user can’t use it, it’s broken

A few days ago, I tried to pump up my bicycle tires. I had to borrow a pump.

Bike-tire pump

The connectors and attachments suggested this pump would fill North-American and European tire tubes as well as air mattresses, soccer balls, and basketballs.

But the thing is, neither the pump’s owner nor I was able to make it work. We couldn’t pump up my bike tires.

Was it me? Was it the pump’s owner? Or was it the pump’s design?

If the user can’t use it, it’s broken (…or it may as well be).

Natural mapping of light switches

I recently moved into a home where the light switches are all wrong. I was able to fix one problem, and the rest is a daily reminder that usability doesn’t just happen—it takes planning.

Poorly mapped light switches.
The switch on the left operates a lamp on the right, and vice versa. This is not an example of natural mapping.

On one wall, a pair of light switches was poorly mapped. The left switch operated a lamp to the right, and the right switch operated a lamp to the left. The previous resident’s solution to this confusing mapping was to put a red dot on one of the switches, presumably as a reminder. I put up with that for about three days.

A banister has multiple user groups

We don’t always know what a design is intended to convey. We don’t always recognise or relate to a design’s intended user groups. But we don’t have to know everything that an object’s design is intended to do, in order to make effective use of the object.

I imagine the metal inserts in the wooden banister (see the video, above) are detectable warnings for people who are visually impaired, but that’s only a guess. If you watch the video again, you’ll see that the metal inserts do not occur at every bend in the staircase.

Whatever the intent, the banister fully met my needs.

Gestalt principles hindered my sudoku performance

Last week, while waiting for friends, I picked up a community newspaper in hopes of finding a puzzle to help me pass the time. I found a sudoku puzzle.

A sudoku puzzle consists of nine 3×3 squares, sprinkled with a few starter numbers. The player must fill in all the blanks by referring to the numbers that are already filled in. A number can occur only once in each row of 9, each column of 9, and each 3×3 square.
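
For readers who like the rules stated precisely, here is a small sketch of my own (not part of the original post) that checks those three constraints for a single placement:

    # Hypothetical sketch (Python): a placement is valid only if the digit is
    # absent from its row of 9, its column of 9, and its 3x3 square.

    def placement_is_valid(grid, row, col, digit):
        # grid is a 9x9 list of lists; 0 marks a blank cell
        if any(grid[row][c] == digit for c in range(9)):     # row of 9
            return False
        if any(grid[r][col] == digit for r in range(9)):     # column of 9
            return False
        box_r, box_c = 3 * (row // 3), 3 * (col // 3)        # enclosing 3x3 square
        return not any(
            grid[r][c] == digit
            for r in range(box_r, box_r + 3)
            for c in range(box_c, box_c + 3)
        )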

I regularly complete difficult sudoku puzzles, but this easy one—more starter numbers make the puzzle easier—was taking much longer than I expected.

I soon realised that my slow performance was due to a design decision by the graphic artist!

In the original puzzle, shown at left, the graphic designer used shading for all the starter numbers. In my reformatted version, on the right, I used shading to separate the 3×3 squares. Both puzzles also use thicker lines to separate the 3×3 squares.

gestalt-sudoku-puzzle

The shading for starter numbers, on the left, is unfortunate because it interferes with the player’s perception of the nine 3×3 squares. Instead, players perceive groups of numbers (in diagonals, in sets of two, and sets of five).

I assume the designer’s intention was to help identify the starter numbers. Regardless of the designer’s intention, the human brain processes the shading just as it processes all visual information: according to rules that cognitive psychologists call gestalt principles. A sudoku player’s brain—any human brain—will first perceive the shaded boxes as groups or sets.

gestalt-sudoku-circled

In sudoku, the grouping on the left is actually meaningless—and counterproductive. However, since the brain applies gestalt principles rather involuntarily and at a low level, the grouping cannot easily be ignored. The player must make a deliberate cognitive effort to ignore the disruptive visual signal of the original shading. This extra effort slows the player’s time-on-task performance.

You can check your own perception by comparing how readily you see diagonals and groups in both puzzles above. On the left, are you more likely to see two diagonals, two groups of five, and many groups of two? If you are a sudoku player, you’ll recognise that these groupings in the puzzle are irrelevant to the game.

If you like, you can print the puzzles at the top, and give them to different sudoku players. Which puzzle is faster to complete?

Interested in gestalt principles? I’ve blogged about the use of gestalt principles before.

Auto-correct a touch-screen problem

For the past few months, I’ve been taking an average of 1.6 flights per week on commercial airplanes. Most of these offered seatback entertainment, so I could watch the TV show or movie of my choice, or listen to satellite radio while reading. Touch-screen controls are easy to use because they let me touch—or tap—the item or the control that I want. By using the touch screen, I can select a program, adjust the volume, skip the next song, and so on.

One thing I’ve noticed is that about ¼ of seatback touch screens are poorly registered. By registration, I mean the degree to which the system and the user agree on where the user is tapping or touching the screen:

An illustration of registration

I recorded a video of two common tasks for a seatback entertainment system: selecting the language and adjusting the volume. As you can see, the registration is off, so I initially get the French interface instead of the English, and I must press an unrelated button to adjust the sound:

The registration error is significant. My fingertip tapped about 2 cm left of the centre of the EN button. The larger the registration error, the harder it is to tap a small target—as was the case with the volume controls in the video, above, where I appear to be tapping the Fast-Forward button. On more than one flight I have unintentionally increased the sound to painful levels while attempting to lower the volume!

A system such as this could be made to detect and auto-correct poor registration. If we assume that repeat taps on a blank location indicate poor registration, the software could take three steps (a rough sketch follows the list):

  1. After several repeat taps, select the nearest target—a reasonable guess—even if it is a centimetre or two away from the user’s tap.
  2. Ask the user to confirm the guess. “Did you mean [this one]?”
  3. If the user confirms, calculate the amount by which to correct the registration, and then fix the registration error.
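
Here is a rough sketch of those three steps (my own illustration; the class, names, and thresholds are assumptions, not any real in-flight entertainment API):

    # Hypothetical sketch (Python) of detecting and auto-correcting registration.
    import math

    REPEAT_TAP_THRESHOLD = 3      # how many taps on "nothing" before intervening
    MAX_SNAP_DISTANCE_CM = 2.5    # only guess targets within a couple of centimetres

    class RegistrationCorrector:
        def __init__(self, targets):
            self.targets = targets            # {name: (x_cm, y_cm)} button centres
            self.offset = (0.0, 0.0)          # correction currently applied to taps
            self.missed_taps = []

        def corrected(self, tap):
            return (tap[0] + self.offset[0], tap[1] + self.offset[1])

        def on_blank_tap(self, tap):
            # Called when a (corrected) tap lands on no target.
            self.missed_taps.append(tap)
            if len(self.missed_taps) < REPEAT_TAP_THRESHOLD:
                return None
            # Step 1: guess the nearest target to the repeated taps.
            cx = sum(t[0] for t in self.missed_taps) / len(self.missed_taps)
            cy = sum(t[1] for t in self.missed_taps) / len(self.missed_taps)
            name, (tx, ty) = min(self.targets.items(),
                                 key=lambda kv: math.dist((cx, cy), kv[1]))
            if math.dist((cx, cy), (tx, ty)) > MAX_SNAP_DISTANCE_CM:
                return None
            # Step 2: the UI would now ask, "Did you mean [name]?"
            return name, (tx - cx, ty - cy)   # proposed target and offset

        def on_user_confirms(self, proposed_offset):
            # Step 3: apply the correction to all future taps.
            self.offset = proposed_offset
            self.missed_taps.clear()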

This solution requires a screen—perhaps the start screen—whose choices are spaced far apart, so the system can detect when the user appears to be tapping a blank space:

Tapping a blank space (at right)

If user testing were to show that auto-correction needs human involvement, after calculating the registration error, the system could ask the user to check the corrected registration. For example:

Confirming that the registration is correct
Are you there? Please tap the green circle.
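
Continuing the sketch above (again, my own assumption about how this might work), the check itself could be as simple as verifying that the corrected tap lands inside the confirmation circle:

    # Hypothetical sketch (Python): verify the corrected registration.
    import math

    def registration_confirmed(raw_tap, offset, circle_centre, circle_radius_cm=1.0):
        # True if the offset-corrected tap lands inside the confirmation circle.
        corrected = (raw_tap[0] + offset[0], raw_tap[1] + offset[1])
        return math.dist(corrected, circle_centre) <= circle_radius_cm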

I haven’t done any testing of this idea, nor have I given it much thought, so I’m certain there are many more and better ways to auto-correct a registration problem on a touch screen. I merely wanted to identify one possible solution in order to get to the next point: the need to consider the business drivers when deciding to address (or not to address) a usability problem.

Everything costs money

Fixing this problem—it’s a real problem, you’ve seen the video—would cost money. If the following can be quantified and evaluated within a framework of passenger-experience goals, there may be a convincing business case:

  • Not every passenger can work around a registration problem. Those who cannot would be unable to use the entertainment system. When everyone else gets a movie, how does the passenger with a failing system feel?
  • If a failed entertainment system is perceived as a negative experience, will passengers blame the touch-screen/software manufacturer or blame the airline? I’m sure you can imagine the complaint: “I sat there for hours without a movie! It’s the airline’s fault.” What’s the likelihood that this will cause churn (passenger switches to another brand next time)?
  • Based on the screens I’ve seen, some frustrated passengers must be using hard objects that scratch and even gouge the touch screen. Are they trying to force the screen to understand what they want? Are they vandalising the screen? What’s the cost of replacing a damaged or vandalised screen?
  • A scratched screen is like graffiti. It affects every subsequent passenger in that seat. Do vandalised screens affect the airline’s goal of attaining a particular passenger rating for perceived quality or aesthetic experience?
  • The in-flight entertainment system was implicated in a catastrophic Swissair crash near Peggy’s Cove about a decade ago. Would a fix to the touch-screen registration problem incur prohibitive safety-testing costs?

User performance depends on conditions

In early June, in a hotel lobby, I stopped to observe someone troubleshooting a wireless connection. I’ve faced this challenge myself, since every hotel seems to have a slightly different process for connecting.

The person I was observing was visually impaired and had his GUI enlarged by about 1000% or more. As he attempted to troubleshoot his wireless connection, he was very rapidly scrolling horizontally and vertically in order to read the text and view the icons in the Wireless Connection Status dialog box. The hugely enlarged GUI flew around the screen. His screen displayed only a small portion of the total GUI, but he never lost his place.

Only part of the screen is visible

In contrast, I lost my place repeatedly. I couldn’t relate the different pieces of information, so what I saw was effectively meaningless to me much of the time. His spatial awareness—his ability to navigate quickly around a relatively large area—was clearly more developed than mine.

I could not keep up with all of the text, either, even when he was reading it to me out loud: “It says ‘Signal Strength’ is 4 bars, but it won’t connect. See?” (Well, actually, I didn’t see.) Though I’m very familiar with this dialog box, I could only read the shorter words as they flew by on screen. The longer words were illegible to me. His ability to read rapidly moving whole words when only parts of them were visible at any given instant was much more developed than mine. I felt sheepish about being functionally illiterate under these conditions.

Flying text is hard to read

It was interesting to see how my own user performance depends on such a narrow range of conditions. I need to see the whole word and its context. I need to see at least half the dialog box at once. And, if the image is moving, it must be moving slowly.