Designing with research used to be easier

About ten years ago, usability testing of software got a lot easier, thanks to improved tools that let us see our products in the hands of users, along with the faces, voices, and actions of test participants. But then along came mobile devices, and usability testing of apps again became difficult.

Research needed an expensive facility

Around the turn of the century, it was possible to measure how an interface performed in the hands of customers, but that required an expensive lab that had cameras, microphones, and seats for observers behind one-way glass. In those days, only large companies had a budget for usability labs, so smaller companies made design decisions based on best guesses and opinions.

Due to budget constraints, I had no access to a usability lab. Instead, I would talk to software developers about the user interfaces we were building. “We can put text in the interface to explain how it works,” I’d say, and: “We can use different controls so it becomes more obvious how to use the product.” The developers and I were all interested in quality, but we didn’t always agree on what quality might look like. We relied on our opinions and personal biases to predict how software interfaces would perform in the hands of customers.

Then research got easy and affordable

One day, almost a decade ago, I heard about TechSmith Morae. This software was a game changer because it could turn any computer into a usability lab. TechSmith’s product evangelist, Betsy Weber, and her user-research colleague, Josie Scott, attended industry conferences and talked tirelessly about this miracle product, which could record someone’s actions—clicks and typing—along with everything on a computer screen, plus their face and voice. All we had to do was plug in a camera and microphone, because these were the days before laptops had built-in cameras and microphones. Usability practitioners embraced this product. We gave people tasks to complete while we used Morae to watch them in action.

Suddenly, we could invite developers and other stakeholders into an adjacent room, where they could watch and listen to live user testing in near real time. They could see the participant’s puzzled expressions. They could see where the user was mousing, what they looked at and clicked, and what they overlooked. We could also record everything and then, from the recordings, make video clips to show which parts of our software caused participants to struggle. Suddenly, it was easy to help every team member understand the plight of their customers.

One of my earliest participants, during a half-hour usability test, went from blaming herself to expressing extreme dislike of the product. For the development team, I made a video clip of the key points to show how the participant’s attitude toward the product changed from neutral to extreme dislike, over 28 minutes. The participant gave us permission to use the video for product research, but not for this blog, so I’ve paraphrased the video’s story here:

Test participant after 2 minutes · Test participant after 8 minutes · Test participant after 19 minutes · Test participant after 28 minutes
This video was incredibly persuasive because it showed the participant’s emotional reactions. When testing a product’s usability, it’s humbling to see the product fail and awful to see it cause frustration and anger. But the point is to identify and fix problems before customers encounter them, and to get the development team thinking about involving users during the design process. (Usability testing also allowed me to show user delight at innovative new features that worked well, to reinforce that our user-centered design process was working.)

Around the same time that TechSmith released Morae, Web 2.0 enabled the development of competing tools that partially overlapped with Morae’s features. Most of these tools worked only on web pages, not on installed apps, and most did not let researchers see and hear their users in action, as you can read in the descriptions of testing tools from 2009.

Mobile made research difficult again

Much as we love mobile devices, they’ve made usability testing harder. Diverse operating systems and the free movement of mobile devices present challenges that we haven’t seen for almost a decade. While the tools that assess websites still work, there are no tools that provide rich data about installed apps. We’re back to external cameras and the rigs that hold them, and we have to ask participants to keep their phones in a fixed position for the camera. In other words, we’re back to expensive labs and special equipment. What we need is software that can do on a mobile phone what software can already do on a laptop computer: capture and transmit:

  • the app’s screen
  • the participant’s voice
  • the participant’s facial expressions
  • the participant’s taps or gestures
  • the participant’s typing or speech-to-text input

Ironically, smartphones, many tablets, and most hybrid devices have the required camera and microphone. Unfortunately (as of January 2014), no company offers software that can capture and transmit this data from mobile devices that run the Android, iOS, Windows, and BlackBerry operating systems. It’s the camera and voice data, especially, that help researchers understand how participants feel—are they puzzled, frustrated, or delighted?

Not giving up

Development teams tend to be made up of tenacious and skilled people—business analysts, designers, developers—and they’ll follow the evidence. As practitioners, we want development teams to let go of the old ways and evolve toward evidence-based, user-centered design. And we’ll continue to look for ways to measure the empirical performance and the emotional impact of our designs, through usability testing.

Usability testing is an excellent way to show development teams what’s usable. Building a product that is measurably more usable leads to persuasive case studies that show the benefit of usability and user-experience work. It’s too bad that easy measurement tools are currently missing for apps that run on mobile devices. But a few challenges haven’t stopped us before, and they won’t stop us now.

Modernist design: Beyond flat and simple

In recent years, the big players in software have adopted modernist design for their user interfaces. With this redesign, digital comes of age, with a look and feel that’s no longer bound by last century’s conventions or by the needs of those new to computing. A modernist user interface focuses people on their current task, supports fast-paced use, and embraces the fact that the interface is digital. The intent is to help people learn and use software more easily—and you’ve no doubt seen and used modernist interfaces, especially on your phones and tablets, in Google products, and on Windows 8 and later.

It’s easy to reduce modernism to two guidelines: put less on the screen, and make what’s still there look flatter. While modernist software can work well, blindly applying those two guidelines without understanding the underlying principles can lead to puzzling and inconsistent experiences. In some cases—including in products by Apple and Microsoft—fewer items and less visual detail on screen have resulted in the removal or omission of necessary cues that separate content from controls, the cues that allow people to learn and use the software effectively.

Fewer elements on the screen

Modernism calls for a simpler interface. With fewer choices on the screen at any given time, people can work faster and are more likely to choose what they need—provided the right elements are present at the right time. Getting this right requires an excellent understanding of the tasks customers need to do, and when. Research on customers before the design stage, and testing with customers during the design stage, are both key. Simply putting less on the screen without testing how the interface performs in the hands of customers is a gamble.

Visually simpler elements on the screen

Modernism calls for an interface that is stripped of distracting ornamentation. With less visual noise on the screen, the interface is less demanding and—again—easier and faster to use. Simply removing extra lines, extra colours, and extra words without testing how the interface performs in the hands of users is a gamble. Here’s why.

In the illustration, example 1 shows a button that clearly has affordance. On screen, the button looks like a 3D object that we could press in the real world. The button’s gradient fill and shadow make it appear 3D. Most people know this 3D appearance from earlier versions of Windows and from the first generations of the iPhone.

Examples of visual affordance

In example 2, the 3D appearance and shadow are removed. Most people first noticed this flattened look for buttons in Google Chrome and for tiles in Windows 8. With most of the affordances stripped away, in some contexts people have trouble recognising that these boxes are buttons to click or tap, or objects to drag or swipe.

It is possible to strip away the ornamentation without going too far. In example 2, the Search button (far right) has a slight gradient, slightly rounded corners, and a slight shadow on roll-over, all of which subtly help identify this element as a button. This button is from Google Gmail’s search box.

Example 3 shows “buttons” that have the background rectangle removed, leaving only text. For many years, blue text—especially when underlined—was used to identify a hyperlink. Most people know this link appearance from the Internet’s first two decades.

Examples of visual affordance

In example 4, the colour and underline are also removed. You may have noticed these plain, modernist links in mobile devices such as Windows Phone and applications such as Microsoft Zune. With the colour stripped away, in some contexts people have trouble recognising which text is a link to click, tap, or swipe.

It is possible to strip away the background colour and text decoration and still provide cues that people can interact with the text. For example, in the illustration below, the row of links extends off the right edge of the screen to suggest the links can be swiped. Extending off the screen is easy to achieve, for example by enlarging the text.

Swiping to see more

Example 5 shows two icons that have lots of realistic detail. This resemblance to physical objects is called skeuomorphism. For many years, skeuomorphism increased as improvements to computer screens allowed more colour, better display resolution, and thus more realism. During this time, each icon typically also had a label to help make the icon’s meaning clear.

Examples of visual affordance

In example 6, the skeuomorphism and labels have disappeared, or are hidden. Most people first noticed the flattened, modernist icons in Google apps and in Microsoft Office 2013. In some contexts, people don’t recognise the icons without their labels, and either don’t know how to or don’t think to reveal the labels.

It is possible to see the labels on demand. For example, Windows Phone provides an icon to tap that reveals the labels, as illustrated below.

New cues for digital interfaces

The ellipsis (the “…”) is a cue to users that “there’s more.” This cue is necessary—so it hasn’t been stripped away in this modernist design. This is also an example of a cue that’s natively digital. It’s not possible in the physical world to show and hide content because paper is static, not interactive.

On earlier iPhones and iPads, some apps would use skeuomorphic detail to cue people about how an app functioned. For example, an image of spiral-coil binding (below left) signals that there may be more content on another page, and page-curl transitions that mimic a turning page reinforce this as the user navigates from screen to screen. In contrast, the more modernist design (below right) uses a cue that’s only possible in the digital world. The screen uses content peeking (look closely at the right edge) to signal that there is more content on another page.

Skeuomorphism

Some design challenges, as the content-peeking example shows, have excellent solutions that comply with modernist design principles: simple, less on the screen, and natively digital. Other design challenges have yet to be solved.

Natively digital

One example of an unsolved challenge is the Windows Phone calculator, whose functions change when the device is rotated. Rotating the device clockwise switches the standard calculator to a scientific calculator; rotating it counter-clockwise switches the standard calculator to a binary-and-hexadecimal calculator. This exemplifies what it means to be truly digital—in the real world, a calculator on your desk cannot change its buttons, no matter how you rotate it.

Windows Phone calculator

However, the current design fails because nothing in the interface suggests that these functional changes will occur, so there’s no reason to rotate the device in order to discover them. This design challenge calls for a solution—a visible button, icon, or other explicit path—that clearly toggles through the calculator’s three states, for example by forcing the screen orientation to change for at least a few seconds while temporarily ignoring the device’s physical orientation, or perhaps by locking it in place.

User interfaces that enable full product use

To be “usable” requires not only that people can successfully use software tools to perform their tasks, but also that they can locate and recognise those tools. This used to be easier to accomplish when there were fewer platforms.

The big players are changing the way user interfaces look and feel by applying modernist design to make them truly digital. At the same time, software is changing because the way people use it, where they use it, and with whom they use it are changing. As development teams continue to provide new possibilities, they need to work with interface designers, usability analysts, and members of their target audience—the users or customers. Together, we can ensure the new interactions and simplified interfaces we design and build will tell people how the interface works, and will both help and remind people to find the functions and content they need. It’s not about making things flat and removing clutter; it’s about getting the interface right while applying modernist principles, so people can use the product fully.

Postscript
I came across a video, titled Everything is a remix, that explains the “why” of modernism.

Overcoming an initial language barrier

Imagine completing a form that asks for your initials—the first letters of each of your names. Now imagine you’re from a culture where your name isn’t written in letters, but in strokes and characters.

Some basic stroke orders of Chinese characters

In some cases, the concept of initials needs clarification, as the sign below indicates. The sign, spotted at Simon Fraser University by Seanna Takacs, explains in Chinese how to identify one’s initials.

A sign that explains what “initials” are

Isn’t this clever? Apparently, the staff only needs to point at the sign, and it does the rest.

But this raises some questions

  • Was the form tested on all audiences?
  • Are initials necessary, or could the form ask for different information?

Would you have designed it that way?

In my day-to-day life, I often think about design problems as I encounter them. I find myself wondering about information that I don’t have—details that would help me solve the problem I noticed. And I wonder: faced with the same constraints, would I have come up with the same solution? Here’s one I encountered.

Passengers waiting to board a ferry

Last week, some friends wanted to visit their family on an island. Where I live, people use ferries to travel between various islands and the mainland. At times, I’ve made the crossing on foot, by bus, or by passenger car. The choice might depend on the size of our group, how far we’re going on the other side, how much we want to spend, and what time of day and year we’re travelling. On busy days the ferries fill to capacity, and traffic reports may announce “a 1- or 2-sailing wait” between points. From time to time the media discusses changes to ferry service, prices, and ridership. All in all, there are a lot of factors influencing the deceptively simple question: “When I get to the ferry, will there be space for me on board?” The question could also be: “Can I avoid waiting in line?”

The ferry company’s website answers this question in a seemingly fragmented way, and that got me thinking: why was the answer fragmented, and what user needs was the website’s current design meeting? The ferry company segments its audience by mode of travel. This segmentation is logical for an audience motivated by cost, because a ferry passenger on foot pays less than a ferry passenger in a car. But when other decision-making factors are more important than price—such as space availability—segmenting users by mode of travel might not be helpful.

Can I avoid waiting?

The friends I mentioned earlier had all the time in the world to get to their family on the island. But they didn’t want to wait in line for hours. Finding the answer to “is there space for us, or will we have to wait” is complicated because the answers seem to be organized by mode of travel on different pages of the website. Here’s a reproduction of one of the first “is there space for me” answers I found on the website:

Is there space on the ferry?

Given the question, the above screen may not be clear. What is deck space? And—look closely at the orange bar—how much deck space is available? Is it zero or 100%? Is a reservation the same thing as a ticket? Does everyone require a reservation to board?

Here’s another way to present the same information, this time making it clearer that a driver’s willingness to pay more may influence wait time:

No reserved spaces on the ferry

Now it’s clear that this information about availability only applies to vehicles that want a reservation. That means foot passengers, bus passengers, and cyclists still don’t have an answer to the “will we have to wait” question. From experience, frequent travellers already know part of the answer: passengers on foot almost never have to wait, but occasional travellers and tourists wouldn’t know this. And travellers with vehicles may wonder about alternatives, because leaving the car on shore and boarding on foot could put them on an earlier ferry. The answer to “can we avoid waiting” may require a comparison of wait times for each mode of travel.

Here’s another way to present the information, this time listing more modes of travel:

Different types of space on the ferry

The above screen answers the “can we avoid waiting” question more clearly. In addition to providing greater certainty for some modes of travel, it also meets the (presumed) business need of generating revenue by selling reservations.

Design questions, but no answers

It’s easy to theoretically “solve” a design problem that we encounter, but there are always unknowns.

  • Is there really a design problem? How would we know?
  • Would this design have been technically possible?
  • Would this design have been affordable?
  • Would this design have met the needs of many users, or only a few?
  • Would this design have been ill received by customers or interested groups?
  • and so on….

So if you can’t know all the answers, why bother with the exercise? Because it’s what we do, in our line of work.

The trigger for this exercise

Here’s an excerpt of the screen that inspired this post.

Excerpt of the original screen

Reduce spam without hindering usability

Updated Aug 2013: more ratings of anti-spam choices and more links.

If your website lets visitors sign up, join in, add comments, or enter reviews, then—in addition to the legitimate details you want—you’re getting some garbage. Some of this garbage is sent by automated spam-bots.

You can reduce the unwanted entries that your website collects, but consider who pays the price to keep your business’ data clean. Does your business pay? …Or do your legitimate site visitors pay?

Choose usability … and less spam

Anti-spam options can present users with difficult tasks

Garbage data may be a problem for you. But don’t punish your site’s legitimate visitors by making them do your anti-spam work. Many spam-fighting choices burden site visitors with extra tasks—and these tasks can be difficult. Fortunately, some spam-fighting choices don’t burden your visitors.

Here’s a usability assessment* of common anti-spam choices.

Anti-spam choice | Simple | Easy | Quick | Accessible | Total
Do nothing: Don’t automate your spam-fighting. Instead, assess each form and each comment by hand. | ✓ | ✓ | ✓ | ✓ | 4
Spam-filtering service: A third-party service assesses the input and flags likely spam. Read more. | ✓ | ✓ | ✓ | ✓ | 4
Load- and submit time: If a bot fills out the entire form faster than a human could, their data is discarded. Read more. | ✓ | ✓ | ✓ | ✓ | 4
Duration of focus: If a bot fills each box on the form faster than a human could, their data is discarded. Read more. | ✓ | ✓ | ✓ | ✓ | 4
Unique tokens: It can be harder for a bot to get a unique token each time they fill a form. Read more. | ✓ | ✓ | ✓ | ✓ | 4
Invisible to humans: If a bot fills in a data-entry box humans can’t see—a honeypot—their data is discarded. Read more. And more. Still more. | ✓ | ✓ | ✓ | ½ | 3½
Invisible to spam-bots: Humans must select an option or enter data in a box that spam-bots cannot see. Read more. | ✓ | ✓ | ✓ | ½ | 3½
Social-media login: Users authenticate by signing in with their social-media account. Read more. | ✓ | ✓ | ½ | ½ | 3
Logic question: Users answer a logic question that requires moving or sorting objects. Read more. | ½ | | | | ½
Review-details page: Users review an extra page that bots do not analyze. | ½ | ½ | ½ | ½ | 2
SMS verification: On the form, users enter a code that they receive via text message. | ✓ | ½ | ½ | | 2
CAPTCHA™: Users enter the text from a distorted image into a field. Read more. | | | | | 0


* This is how the choices were scored:

  • Simple. If the task is readily understood the first time by most people, it gets a ✓ in the Simple column. The first time you encountered a CAPTCHA™, was the task readily understood?
  • Easy. If the task is done correctly most times by most people, it gets a ✓ in the Easy column. How often have you failed to enter the correct CAPTCHA string?
  • Quick. If the task is completed quickly by most people, it gets a ✓ in the Quick column.
  • Accessible. If the task is not a hurdle to users who rely on assistive technology, it gets a ✓ in the Accessible column.
  • If the choice adds no task for the user, then it gets a ✓ in each column. (In the table, these are the choices that score a full 4.)
  • A task that can be skipped gets only ½ in each column, because the user must process the information before deciding to skip the task.
  • Total. The sum of a row’s Simple, Easy, Quick, and Accessible scores.

As you encounter more anti-spam choices, you can use the ratings above to assess whether a choice is simple, easy, quick, and accessible. You can also add columns for other measures, as needed.
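For readers who want to see how little the no-task checks ask of visitors, here’s a minimal sketch that combines a honeypot box with a load- and submit-time check. Python with Flask is assumed purely for illustration; the storage helper save_comment and the three-second threshold are hypothetical:

    import time
    from flask import Flask, request, render_template_string

    app = Flask(__name__)

    FORM = """
    <form method="post" action="/comment">
      <input type="hidden" name="loaded_at" value="{{ loaded_at }}">
      <!-- Honeypot: hidden from humans; bots tend to fill every box. -->
      <input type="text" name="website" style="display:none"
             tabindex="-1" autocomplete="off" aria-hidden="true">
      <textarea name="comment"></textarea>
      <button type="submit">Send</button>
    </form>
    """

    MIN_SECONDS = 3  # assumption: no human completes this form faster

    def save_comment(text):
        print("stored:", text)  # stand-in for real storage

    @app.route("/comment", methods=["GET", "POST"])
    def comment():
        if request.method == "GET":
            # Record when the form was loaded, for the timing check.
            return render_template_string(FORM, loaded_at=time.time())

        # Honeypot check: a filled-in invisible box signals a bot.
        if request.form.get("website"):
            return "", 204  # discard silently

        # Load- and submit-time check: submitted too fast to be human.
        try:
            elapsed = time.time() - float(request.form["loaded_at"])
        except (KeyError, ValueError):
            return "", 204
        if elapsed < MIN_SECONDS:
            return "", 204

        save_comment(request.form.get("comment", ""))
        return "Thanks for your comment."

Note that marking the hidden box aria-hidden matters, because a honeypot can otherwise trap users of assistive technology, and that the timestamp travels through the client, so a determined bot could spoof it. The point is only that neither check adds a task for legitimate visitors.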

One of the surprises in this method is how many choices are less desirable—from the perspective of usability—than doing nothing. Have another look at the table, and see how the first choice—“Do nothing”—compares to the other choices.

When a user interface is for using—not for understanding—a product

The purpose of a user interface is not to explain how a product works. Instead, its purpose is to help people use the product. Here’s an idea: if someone can use your product without understanding how it works, that’s probably just fine.

What model does the user interface reflect?

Models are useful to help people make sense of ideas and things.

An implementation model is how engineers and software developers think of the thing they’re building. It helps them to understand the product’s inner workings, the sum of its software algorithms and physical components. For example, a car mechanic has an implementation model of combustion engines.

A mental model is how someone believes a product behaves when they interact with it. It helps them to understand how to use the product. For example, a typical car driver has a mental model of pressing the gas pedal to go faster and pressing the brake to slow down. This mental model doesn’t reflect how the car is built—there are many parts between the gas pedal and the spinning tires that typical drivers don’t know about.

The implementation model and the mental model can be very similar. For example, the mental model of using a wood saw is that “The saw makes a cut when I drag it back and forth across the wood.” This overlaps with the implementation model. In addition to the back-and-forth user action, the implementation model also includes an understanding of how the saw’s two rows of cutting edges—one for the forward stroke and one for the backward stroke—cut the wood fibers, break the cut fibers loose, and then remove the fibers from the kerf, and whether the saw’s tooth shape is better suited to fresh wood or dried wood.

The mental- and implementation models can overlap, or not

The implementation model and the mental model can also be very different. Let’s consider another example: getting off a public-transit bus. The mental model of opening the exit doors is that “When the bus stops, I give the doors a nudge and then the doors open fully.” The implementation model of the exit doors is that, once the bus stops and the driver enables the mechanism, the exit doors will open when a passenger triggers a sensor. Now consider this: if the sensor is a touch sensor, then the passenger’s mental model of “nudging the door” is correct. But the sensor is, in fact, a photoelectric sensor—a beam of light—so the passenger’s mental model of “nudging the door” is incorrect.

To exit, break the photoelectric beam

Getting bus passengers to break the photoelectric beam was a real-life design challenge that was solved in different ways. In Calgary, public-transit buses use a large, complex sign on exit doors to present a mental model that’s somewhat consistent with the implementation model:

Signage explains the complex implementation model

TO OPEN THE DOOR

      1. WAIT FOR GREEN LIGHT
      2. WAVE HAND NEAR DOOR HERE

In Vancouver, public-transit buses use a large, simple sign on exit doors to present a mental model that’s inconsistent with the implementation model:

Signage for a simpler mental model

TOUCH HERE ▓ TO OPEN

In fact, touch does not open the exit doors at all—not on the Vancouver buses or the Calgary buses I observed. Only when a passenger breaks the photoelectric beam will the doors open. In Calgary passengers are told to wave a hand near the door. A Calgary bus passenger might conclude that the exit door has a motion sensor (partly true) or a proximity sensor (not true).  In Vancouver passengers are told to touch a target, and the touch target is positioned so the passenger will break the photoelectric sensor beam when reaching for the target. A Vancouver bus passenger might conclude that the exit door has a touch sensor (not true).

Calgary bus passengers are more likely to guess correctly how the exit door actually works because the sign presents a mental model that partly overlaps the implementation model: the door detects hand-waving. But does that make it easier for someone without prior experience to exit the bus?

No, it’s harder.

It’s more difficult for a sign to get passengers to hold a hand up in the air in front of the door than to get them to put a hand on the door. Here’s why: if you knew nothing about a door that you wanted to open outward, would you place a hand on the door and push? Or would you wave at it? From our lifelong experience with doors, we know to push them open. Touching a door is more intuitive than waving at it, and that’s why “nudge the door” is a better mental model and thus an easier behaviour to elicit and train. The simpler mental model improves usability.

Rule of thumb for mental models

When understanding a product’s inner workings is unnecessary, staying true to the implementation model risks increasing the complexity of the user interface. Instead, have the user interface reflect a mental model that is simple, effective, and usable.

If you can relate the use of an object to a common experience or simple idea then do so—even if it doesn’t follow the implementation model. It is unnecessary for a system’s user interface to convey how the product was built. The user interface only needs to help users to succeed at their tasks.

No doubt there are cases where a lack of understanding of a product’s inner workings could cause danger to life and limb, or cause unintended destruction of property. In that case, the mental model needs to convey the danger or risk or, failing that, needs to overlap more with the implementation model.

Chip-card usability: Remove the card to fail

Card reader

I went to the corner store, made a purchase, and tried to pay by using a chip card in a machine that verifies my PIN. My first attempt failed because I pulled my card out of the card reader too soon, before the transaction was finished. I should add that I removed my card only when the machine apparently told me to.

The machine said: “REMOVE CARD”

And just as I pulled my card out, I noticed the other words: “PLEASE DO NOT”

Have you done this, too…?

Since making a chip-card payment is an everyday task for most of us, I wonder: “What design tweaks would help me—and everyone else—do this task correctly the first time, every time?” Who would have to be involved to improve the success rate?

Ideas for a usable chip-card reader

A bit of brainstorming produced a list of potential solutions.

  • Less shadow. Design the device so it doesn’t cast a shadow on its own screen. The screen of the card reader I used was sunk deeply below its surrounding frame, and the frame cast a shadow across the “PLEASE DO NOT” phrase. (See the illustration.)
  • Better lighting. Ask the installer to advise the merchant to reduce glare at the cash register, by shading the in-store lighting and windows.
  • Freedom to move. The device I used was mounted to the counter, so I couldn’t turn it away from the glare.
  • Layout. Place the two lines of text—“PLEASE DO NOT” and “REMOVE CARD”—closer together, so they’re perceived as one paragraph. When perceived as separate paragraphs, the words “REMOVE CARD” are an incorrect instruction.
  • Capitalisation. Use sentence capitalisation to show that “remove card” is only part of an instruction, not the entire instruction.
  • Wording. Give the customer a positive instruction: “Leave your card inserted” could work. But I’d test with real customers to confirm this.
  • Predict the wait time. Actively show the customer how much longer to wait before removing their card. 15 seconds…, 10 seconds…, and so on.
  • Informal training. Sometimes, the cashier tells you on which side of the machine to insert your card, when to leave it inserted, and when to remove it.
  • Can you think of other ideas?

Listing many potential ideas—even expensive and impractical ones—is a worthwhile exercise, because a “poor” idea may trigger other ideas—affordable, good ideas. After the ideas are generated, they can be evaluated. Some would be costly. Some might solve one problem but cause another. Some are outside of the designers’ control. Some would have to have been considered while the device was still on the drawing board. Some are affordable and could be applied quickly.

Making improvements

Designers of chip-card readers have already made significant improvements by considering the customer’s whole experience, not just their use of the card-reader machine in isolation. In early versions, customers would often forget their cards in the reader. With a small software change, the card must now be removed before the cashier can complete the transaction. This dependency ensures customers take their card with them after they pay. One brand of card reader is designed for customers to insert their card upright, perpendicular to the screen. This makes the card more obvious, and—I’m giving the designer extra credit—the upright card provides additional privacy to help shield the customer’s PIN from prying eyes. These changes show that the design focus is now on more than just verifying the PIN; it’s about doing it quickly and comfortably, without compromising future use of the card. It’s about the whole experience.
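That dependency reduces nicely to a toy state machine. Here’s a sketch (my own naming, in Python, not any vendor’s actual firmware) of the interlock: the cashier cannot finish the sale until the card leaves the reader.

    from enum import Enum, auto

    class State(Enum):
        CARD_INSERTED = auto()
        PIN_VERIFIED = auto()
        AWAITING_CARD_REMOVAL = auto()
        COMPLETE = auto()

    class CardReader:
        def __init__(self):
            self.state = State.CARD_INSERTED

        def pin_accepted(self):
            if self.state is State.CARD_INSERTED:
                self.state = State.PIN_VERIFIED

        def payment_approved(self):
            # Prompt the customer to remove the card before the sale closes.
            if self.state is State.PIN_VERIFIED:
                self.state = State.AWAITING_CARD_REMOVAL

        def card_removed(self):
            if self.state is State.AWAITING_CARD_REMOVAL:
                self.state = State.COMPLETE

        def cashier_can_finish(self):
            # The dependency: no card removal, no completed transaction.
            return self.state is State.COMPLETE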

A good hardware designer works with an interaction designer to make a device that works well in its environment. A good user-experience designer ensures customers can succeed with ease. A good usability analyst tests the prototypes or early versions of the device and the experience to find any glitches, and recommends how to fix them.

Gestalt principles hindered my sudoku performance

Last week, while waiting for friends, I picked up a community newspaper in hopes of finding a puzzle to help me pass the time. I found a sudoku puzzle.

A sudoku puzzle consists of nine 3×3 squares, sprinkled with a few starter numbers. The player must fill in all the blanks by referring to the numbers that are already filled. A number can only occur once in each row of 9, each column of 9, and each 3×3 square.
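For the programmers among us, the whole rule fits in a few lines. A minimal sketch, using 0 to stand for a blank (my convention, not the newspaper’s):

    def is_valid(grid):
        """Check the sudoku rule on a 9x9 grid of ints; 0 marks a blank."""
        def no_repeats(cells):
            digits = [d for d in cells if d != 0]
            return len(digits) == len(set(digits))

        rows = [grid[r] for r in range(9)]
        columns = [[grid[r][c] for r in range(9)] for c in range(9)]
        squares = [[grid[r][c] for r in range(br, br + 3)
                               for c in range(bc, bc + 3)]
                   for br in (0, 3, 6) for bc in (0, 3, 6)]
        return all(no_repeats(group) for group in rows + columns + squares)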

I regularly complete difficult sudoku puzzles, but this easy one—more starter numbers make a puzzle easier—was taking much longer than I expected.

I soon realised that my slow performance was due to a design decision by the graphic artist!

In the original puzzle, shown on the left, the graphic designer used shading for all the starter numbers. In my reformatted version, on the right, I used shading to separate the 3×3 squares. Both puzzles also use thicker lines to separate the 3×3 squares.

The original puzzle (left) and the reformatted puzzle (right)

The shading for starter numbers, on the left, is unfortunate because it interferes with the player’s perception of the nine 3×3 squares. Instead, players perceive groups of numbers (in diagonals, in sets of two, and sets of five).

I assume the designer’s intention was to help identify the starter numbers. Regardless of the designer’s intention, the human brain processes the shading just as it processes all visual information: according to rules that cognitive psychologists call gestalt principles. A sudoku player’s brain—any human brain—will first perceive the shaded boxes as groups or sets.

The groups that players perceive, circled

In sudoku, the grouping on the left is actually meaningless—and counterproductive. However, since the brain applies gestalt principles rather involuntarily and at a low level, the grouping cannot easily be ignored. The player must make a deliberate cognitive effort to ignore the disruptive visual signal of the original shading. This extra effort slows the player’s time-on-task performance.

You can check your own perception by comparing how readily you see diagonals and groups in both puzzles above. On the left, are you more likely to see two diagonals, two groups of five, and many groups of two? If you are a sudoku player, you’ll recognise that these groupings in the puzzle are irrelevant to the game.

If you like, you can print the puzzles at the top, and give them to different sudoku players. Which puzzle is faster to complete?

Interested in gestalt principles? I’ve blogged about the use of gestalt principles before.

Auto-correct a touch-screen problem

For the past few months, I’ve been taking an average of 1.6 flights per week on commercial airplanes. Most of these offered seatback entertainment, so I could watch the TV show or movie of my choice, or listen to satellite radio while reading. Touch-screen controls are easy to use because they let me touch—or tap—the item or the control that I want. By using the touch screen, I can select a program, adjust the volume, skip the next song, and so on.

One thing I’ve noticed is that about ¼ of seatback touch screens are poorly registered. By registration I mean that the system and the user agree on where the user is tapping or touching the screen:

An illustration of registration

I recorded a video of two common tasks for a seatback entertainment system: selecting the language and adjusting the volume. As you can see, the registration is off, so I initially get the French interface instead of the English, and I must press an unrelated button to adjust the sound:

The registration error is significant. My fingertip tapped about 2 cm left of the centre of the EN button. The larger the registration error, the harder it is to tap a small target—as was the case with the volume controls in the video above, where I appear to be tapping the Fast-Forward button. On more than one flight, I have unintentionally increased the sound to painful levels while attempting to lower the volume!

A system such as this could be made to detect and auto-correct poor registration. If we assume that repeated taps on a blank location indicate poor registration, the software could:

  1. After several repeat taps, select the nearest target—a reasonable guess—even if it is a centimetre or two away from the user’s tap.
  2. Ask the user to confirm the guess. “Did you mean [this one]?”
  3. If the user confirms, calculate the amount by which to correct the registration, and then fix the registration error.

This solution requires a screen—perhaps the start screen—whose choices are spaced far apart, so the system can detect when the user appears to be tapping a blank space:

Tapping a blank space (at right)

If user testing were to show that auto-correction needs human involvement, after calculating the registration error, the system could ask the user to check the corrected registration. For example:

Confirming that the registration is correct
Are you there? Please tap the green circle.
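Here’s a rough sketch of how those three steps might fit together in software. Every name, threshold, and coordinate unit below is an assumption for illustration, a starting point rather than a tested design:

    import math

    TAP_TOLERANCE_MM = 10   # taps within this distance count as "on target"
    REPEATS_TO_TRIGGER = 3  # repeated taps on blank space suggest bad registration

    class RegistrationCorrector:
        def __init__(self, targets):
            self.targets = targets    # {name: (x, y)} target centres, in mm
            self.offset = (0.0, 0.0)  # correction currently applied to taps
            self.blank_taps = []      # recent taps that landed on nothing

        def handle_tap(self, raw_tap):
            x, y = raw_tap
            tap = (x + self.offset[0], y + self.offset[1])
            name, centre = min(self.targets.items(),
                               key=lambda t: math.dist(tap, t[1]))
            if math.dist(tap, centre) <= TAP_TOLERANCE_MM:
                self.blank_taps = []
                return ("activate", name)
            # Step 1: count repeat taps on blank space, then guess the
            # nearest target, even a centimetre or two from the tap.
            self.blank_taps.append(tap)
            if len(self.blank_taps) < REPEATS_TO_TRIGGER:
                return ("ignore", None)
            # Step 2: ask the user to confirm the guess.
            return ("confirm", name)  # "Did you mean [this one]?"

        def confirm(self, accepted, name):
            # Step 3: if confirmed, shift future taps by the offset
            # between the last blank tap and the confirmed target.
            if accepted and self.blank_taps:
                x, y = self.blank_taps[-1]
                cx, cy = self.targets[name]
                self.offset = (self.offset[0] + (cx - x),
                               self.offset[1] + (cy - y))
            self.blank_taps = []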

I haven’t done any testing of this idea, nor have I given this much thought, so I’m certain there are many more and better ways to auto-correct a registration problem on a touch screen. I merely wanted to identify one possible solution in order to get to the next point: the need to consider the business drivers when deciding to address (or deciding not to address) a usability problem.

Everything costs money

Fixing this problem—it’s a real problem, you’ve seen the video—would cost money. If the following can be quantified and evaluated within a framework of passenger-experience goals, there may be a convincing business case:

  • Not every passenger can work around a registration problem. Those who cannot would be unable to use the entertainment system. When everyone else gets a movie, how does the passenger with a failing system feel?
  • If a failed entertainment system is perceived as a negative experience, will passengers blame the touch-screen/software manufacturer or blame the airline? I’m sure you can imagine the complaint: “I sat there for hours without a movie! It’s the airline’s fault.” What’s the likelihood that this will cause churn (passenger switches to another brand next time)?
  • Based on the screens I’ve seen, some frustrated passengers must be using hard objects that scratch and even gouge the touch screen. Are they trying to force the screen to understand what they want? Are they vandalising the screen? What’s the cost of replacing a damaged or vandalised screen?
  • A scratched screen is like graffiti. It affects every subsequent passenger in that seat. Do vandalised screens affect the airline’s goal of attaining a particular passenger rating for perceived quality or aesthetic experience?
  • The in-flight entertainment system was implicated in the catastrophic Swissair crash near Peggy’s Cove about a decade ago. Would a fix to the touch-screen registration problem incur prohibitive safety-testing costs?

Leaner, more agile

This week, I’m attending a few days of training in agile software development, in an Innovel course titled Lean, Agile and Scrum for Project Managers and IT Leadership.

My first exposure to agile was Desiree Sy’s 2005 presentation, Strategy and Tactics for Agile Design: A design case study, at the Usability Professionals Association (UPA) annual conference in Montreal, Canada. It was a popular presentation then, and UPA-conference attendees continue to be interested in agile methods now. This year, at the UPA conference in Portland, USA, a roomful of usability analysts and user-experience practitioners discussed the challenges that agile methods present to their practice. One of the panellists told the room: “Agile is a response to the classic development problem: delivering the wrong product, too late.” There was lots of uncomfortable laughter at this. Then came the second, thought-provoking sentence: “Agile shines a light on the rest of us, since we are now on the critical path.” Wow! So it’s no longer developers, but designers, usability analysts, and the like who are holding up the schedule?

An agile load

During this week’s training, I’m learning lots while looking for one thing in particular: how to ensure agile methods accommodate non-developer activities, from market-facing product management, to generative product design, to early prototype testing, to usability testing, and so on.

I’m starting to suspect that when agile methods “don’t work” for non-developers, it’s because the process is wagging the dog (or because its “rules” are being applied dogmatically). I think I’m hearing that agile isn’t a set of fixed rules—so not a religion—but a sensible and flexible method that team members can adapt to their specific project and product.