Auto-correct a touch-screen problem

Lately, my work has me flying frequently on commercial airplanes. Most of these flights offer seatback entertainment, so I can watch a TV show or movie of my choice, or listen to satellite radio while reading. Touch-screen controls are easy to use because they let me touch (or tap) the item or control I want in order to select a program, adjust the volume, skip to the next song, and so on.

One thing I’ve noticed is that about ¼ of seatback touch screens are poorly registered. By registration I mean that the system and the user agree on where the user is tapping or touching the screen:

An illustration of registration
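To make the idea of auto-correction concrete, here is a minimal sketch of one way a system might compensate for poor registration. Everything in it is hypothetical (the calibration targets, the estimate_offset and correct helpers): it assumes the system can collect a few pairs of intended versus reported tap coordinates, estimate the average offset, and subtract it from future taps.

```python
# Hypothetical sketch: auto-correct a poorly registered touch screen by
# estimating the average offset between where users aim and where the
# hardware reports the tap.

def estimate_offset(calibration_taps):
    """calibration_taps: list of ((intended_x, intended_y), (actual_x, actual_y))."""
    n = len(calibration_taps)
    dx = sum(actual[0] - intended[0] for intended, actual in calibration_taps) / n
    dy = sum(actual[1] - intended[1] for intended, actual in calibration_taps) / n
    return dx, dy

def correct(tap, offset):
    """Subtract the estimated offset from a reported tap position."""
    return tap[0] - offset[0], tap[1] - offset[1]

# The user aims at three known targets; this screen reports each tap
# roughly 12 pixels low and 5 pixels to the right.
calibration = [((100, 100), (105, 112)),
               ((400, 300), (404, 313)),
               ((700, 500), (706, 511))]
offset = estimate_offset(calibration)
print(correct((305, 212), offset))  # ~ (300, 200): where the user was aiming
```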

Continue reading “Auto-correct a touch-screen problem”

Leaner, more agile

This week, I’m attending a few days of training in agile software development, in an Innovel course titled Lean, Agile and Scrum for Project Managers and IT Leadership.

My first exposure to agile was in Desiree Sy’s 2005 presentation, Strategy and Tactics for Agile Design: A design case study, to the Usability Professionals Association (UPA) annual conference in Montreal, Canada. It was a popular presentation then, and UPA-conference attendees continue to be interested in agile methods now. This year, at the UPA conference in Portland, USA, a roomful of usability analysts and user-experience practitioners discussed the challenges that agile methods present to their practice. One of the panellists told the room: “Agile is a response to the classic development problem: delivering the wrong product, too late.”

Continue reading “Leaner, more agile”

User performance depends on conditions

In early June, in a hotel lobby, I stopped to observe someone troubleshooting a wireless connection. I’ve faced this challenge myself, since every hotel seems to have a slightly different process for connecting.

The person I was observing was visually impaired and had enlarged his GUI by 1000% or more. As he attempted to troubleshoot his wireless connection, he scrolled horizontally and vertically, very rapidly, to read the text and view the icons in the Wireless Connection Status dialog box. The hugely enlarged GUI flew around the screen. His screen displayed only a small portion of the total GUI, but he never lost his place.

Only part of the screen is visible

In contrast, I lost my place repeatedly. I couldn’t relate the different pieces of information, so what I saw was effectively meaningless to me much of the time. His spatial awareness—his ability to navigate quickly around a relatively large area—was clearly more developed than mine.

I could not keep up with all of the text, either, even when he was reading it to me out loud: “It says ‘Signal Strength’ is 4 bars, but it won’t connect. See?” (Well, actually, I didn’t see.) Though I’m very familiar with this dialog box, I could only read the shorter words as they flew by on screen. The larger words were illegible to me. His ability to read rapidly moving whole words when only parts of them were visible at any given instant was much more developed than mine. I felt sheepish about being functionally illiterate under these conditions.

Flying text is hard to read

It was interesting to see how my own user performance depends on such a narrow range of conditions. I need to see the whole word and its context. I need to see at least half the dialog box at once. And, if the image is moving, it must be moving slowly.

Unusable sinks on Boeing planes

Usability isn’t just about web pages, as you’ll know if you’ve tried to dial a phone number on someone else’s cell phone. Or if you’ve tried to wash your hands on most Boeing airplanes built in the past 30 years:

Taps with awkward levers

The water only flows while you press the lever—one lever for cold water and one lever for warm water. It takes one hand continuously pressing to make the water flow. Rinsing one hand without the help of the other hand is difficult. Rinsing soap off is much easier when two hands do it together.

Some of the newer Boeing aircraft—like the 787 Dreamliner—may have better taps, but I’ve never been on one. An aircraft lasts decades, so passengers will be using those old Boeing sinks and taps for years to come. Airbus planes, on the other hand, have had ergonomic taps for years: one press starts the water flow, leaving both hands free for soaping and rinsing. The water stops after a fixed duration, but you can always press again to restart it.

While I’m pointing out usability problems in the airline industry, Airbus doesn’t have clean hands either. On the Airbus web site, type a word in the Search box—the word bathroom, for example—and then press ENTER. Nothing happens. The ENTER key doesn’t start the search, but a mouse click does.

Click OK to start searching

It’s ironic: a design that requires me to move a hand from the keyboard to the mouse is a lot like a design that requires me to move a hand from the sink basin to the tap lever.

This sugar packet is a movie

Whether it’s ethnographic research, usability research, or marketing research, I’ve learned that the best insights aren’t always gleaned from scheduled research.

Here’s a photo of impromptu research, conducted by Betsy Weber, TechSmith’s product evangelist. I was her research subject. Betsy recorded me pushing sugar packets around a table as I explained how I’d like Camtasia to behave.

Jerome demos an idea to Betsy. Photo by Mastermaq

Betsy takes information like this from the field back to the Camtasia team. There’s no guarantee that my idea will influence future product development, but what this photo shows is that TechSmith listens to its users and customers.

The ongoing stream of research and information that Betsy brings in from the field helps TechSmith design products that will stay relevant and satisfying for its customers down the line.

Cognitive psych in poll design

The WordPress community recently ran a poll. Users were asked to choose one of 11 visual designs. The leading design got only 18% of the vote, which gives rise to such questions as:

  • Is this a meaningful win? The leader only barely beat the next three designs, and 82% voted for other designs.

WordPress poll

I don’t know about the 18% versus 82%. I do wonder whether some of the entries triggered a cognitive process in voters that caused them to pay less attention to the other designs, which may bring the leading design’s razor-thin lead into question. This cognitive process—known as the “ugly option”—is exploited successfully by designers who deliberately apply cognitive psychology to entice users to act. I’ll explain how below, but first I want to explain my motivation for this blog post.

I’m using this WordPress poll as a jumping-off point to discuss the difficulty of survey design. I’m not commenting on the merit of the designs. (I never saw the designs up close.) And I’m certainly not claiming that people involved in the poll used cognitive psych to affect the poll’s outcome. Instead, in this blog, I’m discussing what I know about cognitive psychology as it applies to the design of surveys such as this recent WordPress.org poll.

Survey design affects user responses

If you’ve heard of the controversial Florida butterfly ballot in the USA’s presidential election in 2000, then you know ballot design—survey design—can affect the outcome. I live outside the USA, but as a certified usability analyst I regularly come across this topic in industry publications; since that infamous election, usability analysts in the USA have been promoting more research and usability testing to ensure good ballot design. I imagine that the Florida butterfly ballot would have tested poorly in a formative usability study.

The recent WordPress poll, however, would likely have tested well in a usability study to determine whether WordPress users could successfully vote for their choice. The question I have is whether the entries themselves caused a cognitive bias in favour of some entries at the expense of others.

It seems that one design was entered multiple times, as dark, medium, and light variations. This seems like a good idea: “Let’s ask voters which one is better.” Interestingly, the visual repetition—the similar images—may have an unintended effect once you add other designs into the mix. Cognitive science tells us people are more likely to select one of the similar ones. Consider this illustration:

More people choose the leftmost image. The brain’s tendency to look for patterns keeps it more interested in the two similar images. The brain’s tendency to avoid the “ugly option” means it’ll prefer the more beautiful one of the two. Research shows that symmetry correlates with beauty across cultures, so I manipulated the centre image in Photoshop to make it asymmetrical, or “uglier”.

The ugly-option rule applies to a choice between different bundles of goods (like magazine subscriptions with different perks), different prices (like the bottles on a restaurant wine list), and different appearances (like the photos, above). It may have applied to the design images in the WordPress poll. The poll results published by WordPress.org list the intentional variations in the table of results:

  • DR1: Fluency style, dark
  • DR2: Fluency style, medium
  • DR3: Fluency style, light

The variants scored 1st, 4th, and 6th

In addition to these three, which placed 1st, 4th, and 6th overall, it’s possible there were other sets of variations, because other entries may have resembled each other, too.

As a usability analyst and user researcher, I find this fascinating. Does the ugly-option rule hold true when there are 11 options? Was the dark-medium-light variation sufficient to qualify one of the three as ugly? Did the leading design win because it was part of a set that included an ugly option? And, among the 11 entries, how many sets were there?

There are ways to test this.

Test whether the poll results differ in the absence of an ugly-option set. A/B testing is useful for this. It involves giving half the users poll A, with only one of the dark-medium-light variants, and the other half poll B, with all three variants included. You can then compare the two result sets. If there is a significant difference, then further combinations can be tested to rule out other possible explanations.
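Here’s a minimal sketch of how that comparison might be analysed once both polls have run. The vote counts below are invented for illustration; the test itself is a standard chi-squared test of independence.

```python
# Hypothetical vote counts: does the medium variant win more often when
# its dark and light siblings appear alongside it (poll B) than when it
# stands alone (poll A)?
from scipy.stats import chi2_contingency

poll_a = {"fluency_medium": 210, "all_other_designs": 1190}  # one variant only
poll_b = {"fluency_medium": 260, "all_other_designs": 1140}  # all three variants

table = [
    [poll_a["fluency_medium"], poll_a["all_other_designs"]],
    [poll_b["fluency_medium"], poll_b["all_other_designs"]],
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value (say, below 0.05) would suggest that the presence of
# the "ugly" variants really did shift votes toward the medium design.
```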

For more about the ugly option and other ways to make your designs persuasive, I recommend watching Kath Straub and Spencer Gerrol in the HFI webcast, The Science of Persuasive Design: Convincing is converting, with video and slides. There’s also an audio-only podcast and an accompanying white paper.

Eyetracking: “I’m typical”

If you’ve ever wondered where exactly on your web site or software your readers or users are looking, eye tracking will tell you that. The eye-tracking equipment emits a specific wavelength of light (invisible to humans) that helps the eye tracker to follow your eyes. As the light bounces off your retinas and back to the eye-tracker’s camera, its software calculates where you were looking, and for how long.

There are different ways to display the results. You can see the data as a “video” that shows a sequence of dots marking everywhere you looked, where larger dots are longer fixations. You can also see the data as a cumulative heat map, similar to this:

An eye-tracking heat map
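A cumulative heat map is conceptually simple to build from raw gaze samples. Here is a sketch, using invented sample data, of the basic aggregation step: bin the (x, y) gaze points into a grid, then blur the grid so clusters of fixations show up as hot spots.

```python
# Sketch: turn raw gaze samples into a heat map.
import numpy as np
from scipy.ndimage import gaussian_filter

SCREEN_W, SCREEN_H = 1024, 768

# Invented gaze samples: (x, y) screen coordinates, one per timestep.
rng = np.random.default_rng(0)
gaze = rng.normal(loc=(512, 300), scale=60, size=(500, 2))

# Bin the samples into a coarse grid; each cell counts the number of
# samples that landed there (i.e., time spent looking at that spot).
heat, _, _ = np.histogram2d(
    gaze[:, 0], gaze[:, 1],
    bins=(64, 48), range=[[0, SCREEN_W], [0, SCREEN_H]],
)

# Smooth so nearby fixations merge into the familiar hot spots.
heatmap = gaussian_filter(heat, sigma=2)
print(heatmap.shape, heatmap.max())
```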

Here’s something interesting I learned about myself. When I participate in an eye-tracking study that studies a photograph—such as a full-page magazine ad—I look at all the same places for about the same duration as other participants in the study. I know this because the composite heat map, which combines the eye-tracking data of all the participants into one heat map, looks indistinguishable from my individual heat map. It turns out I’m normal, after all.

Eye tracking has helped researchers answer all kinds of questions about where users look, what they skip, and how long they linger.

If you’re interested in eye tracking and usability and want to read more, try Eye Tracking as Silver Bullet for Usability Evaluations? by Markus Weber.

Usability of a potential design

Three-quarters of the way through a Five Sketches™ session, the team turns to analysis to help iterate on, and reduce the number of, possible design solutions. This includes a usability analysis.

Stage 3 of generative design: analysis

After (1) informing and defining the problem without judgement, and (2) generating and sketching lots of ideas without judgement, it’s often a relief for the team to start (3) analysing and judging the potential solutions, taking into account the project’s business goals, development goals, and usability goals.

But what are the usability goals? How can a team quickly assess whether potential designs meet those usability goals? One easy answer is to provide the team with a project-appropriate checklist.

Make your own checklist, or find one on the Internet. To make your own, start with a textbook that you’ve found helpful and inspiring. For me, that’s About Face by Alan Cooper. To this, I add things that my experience tells me will help the team—my “favourites” or my pet peeves. In this last category I might consult the Ribbon section of the Vista UX Guide, the User Interface section of the iPhone human-interface guidelines, and so on.

[Video: Make usability checklists (/wp-content/uploads/2009/04/make-usability-checklists.wmv)]
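As an illustration of what “project-appropriate” might mean in practice, here is a hypothetical sketch of a checklist kept as data, with each item tagged by its source so the team can look up the rationale during analysis. The items and sources are examples, not a recommended set.

```python
# Hypothetical usability checklist: each item records its source so the
# team knows where a criterion came from during the analysis stage.
checklist = [
    ("Does the design follow users' mental models?", "About Face"),
    ("Are frequent commands one tap or click away?", "About Face"),
    ("Does the Ribbon group commands by task?", "Vista UX Guide"),
    ("Are touch targets large enough for a fingertip?", "iPhone HIG"),
    ("Does it avoid my pet peeve: unlabelled icons?", "experience"),
]

def score_sketch(sketch_name, answers):
    """answers: list of booleans, one per checklist item, in order."""
    passed = sum(answers)
    print(f"{sketch_name}: {passed}/{len(checklist)} usability criteria met")
    for (item, source), ok in zip(checklist, answers):
        if not ok:
            print(f"  FAILS: {item} [{source}]")

score_sketch("Sketch C", [True, True, False, True, True])
```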

Ten-year-old advice

Fresh advice, still:

“Usability goals are business goals. Web sites that are hard to use frustrate customers, forfeit revenue, and erode brands.

“Executives can apply a disciplined approach to improve all aspects of ease-of-use. Start with usability reviews to assess specific flaws and understand their causes. Then fix the right problems through action-driven design practices. Finally, maintain usability with changes in business processes.”

—McCarthy & Souza, Forrester Research, September 1998

How to test earlier

Involving users throughout the software-development cycle is touted as a way to ensure project success. Does usability testing count as user contact? You bet! But since most companies test their products later in the process, when it’s difficult to react meaningfully to the user feedback, here are two ways to get your testing done sooner.

Prioritise. Help the Development team rank the importance of the individual programming tasks, and then schedule the important tasks to complete early.

Prioritise and schedule the tasks

  • If a feature must be present in order to have meaningful interaction, then develop it sooner.
  • For example, email software that doesn’t let you compose a message is meaningless. To get meaningful feedback from users, they need to be able to type an email.

    Developers often want to start with the technologically risky tasks. Addressing that risk early is good, but it must be balanced against the risk of a product that’s less usable or unusable.

  • If a feature need not be present or need not be working fully in order to have meaningful interaction, then provide hard-coded actions in the interim, and add those features later.
  • For example, if the email software lets users change the message priority from Standard to Important, hard-code it for the usability test so the priority is always Standard (see the sketch after this list).

  • If a less meaningful feature must be tested because of its importance to the business strategy, then develop it sooner.
  • For example, email software that lets users record a video may be strategically important for the company, though users aren’t expected to adopt it widely until most laptops ship with built-in cameras.
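Here is a hypothetical sketch of that hard-coding tactic, using the email-priority example above. The names (Message, PRIORITY_UI_ENABLED) are invented; the point is that the unfinished feature is pinned to a safe default so the rest of the flow can still be tested meaningfully.

```python
# Sketch: pin an unfinished feature to a safe default for a usability test.

# Flip to True once the priority picker is actually implemented.
PRIORITY_UI_ENABLED = False

class Message:
    def __init__(self, to, body):
        self.to = to
        self.body = body
        self.priority = "Standard"

    def set_priority(self, priority):
        if not PRIORITY_UI_ENABLED:
            # Hard-coded for the usability test: composing and sending
            # work fully, but priority is always Standard.
            self.priority = "Standard"
            return
        self.priority = priority

msg = Message(to="test@example.com", body="Hello")
msg.set_priority("Important")
print(msg.priority)  # "Standard" during the usability test
```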

Schedule. For each feature to be tested, get the Development team to allocate time to respond to usability recommendations, and then ensure this time is neither reallocated to problem tasks, nor used up during the initial development effort of the to-be-tested features. Engage the developers by:

  • Sharing the scenarios in advance.
  • Updating them on your efforts to recruit usability-study participants.
  • Retesting after developers incorporate your recommendations, and then reporting the improvements in user performance.

Development planning that prioritises programming tasks based on the need to test, and then allows time in the schedule to respond to recommendations, is more likely to result in usable, successful products.