Gestalt principles hindered my sudoku performance

Last week, while waiting for friends, I picked up a community newspaper in hopes of finding a puzzle to help me pass the time. I found a sudoku puzzle.

A sudoku puzzle consists of nine 3×3 squares, sprinkled with a few starter numbers. The player must fill in all the blanks by referring to the numbers that are already filled. A number can only occur once in each row of 9, each column of 9, and each 3×3 square.

I regularly complete difficult sudoku puzzles, but this easy one—more starter numbers make the puzzle easier—was taking much longer than I expected.

I soon realised that my slow performance was due to a design decision by the graphic artist!

In the original puzzle, shown at left, the graphic designer used shading for all the starter numbers. In my reformatted version, on the right, I used shading to separate the 3×3 squares. Both puzzles also use thicker lines to separate the 3×3 squares.

The shading for starter numbers, on the left, is unfortunate because it interferes with the player’s perception of the nine 3×3 squares. Instead, players perceive groups of numbers (in diagonals, in sets of two, and in sets of five).

I assume the designer’s intention was to help identify the starter numbers. Regardless of the designer’s intention, the human brain processes the shading just as it processes all visual information: according to rules that cognitive psychologists call gestalt principles. A sudoku player’s brain—any human brain—will first perceive the shaded boxes as groups or sets.

In sudoku, the grouping on the left is actually meaningless—and counterproductive. However, since the brain applies gestalt principles rather involuntarily and at a low level, the grouping cannot easily be ignored. The player must make a deliberate cognitive effort to ignore the disruptive visual signal of the original shading. This extra effort slows the player’s time-on-task performance.

You can check your own perception by comparing how readily you see diagonals and groups in both puzzles above. On the left, are you more likely to see two diagonals, two groups of five, and many groups of two? If you are a sudoku player, you’ll recognise that these groupings in the puzzle are irrelevant to the game.

If you like, you can print the puzzles at the top, and give them to different sudoku players. Which puzzle is faster to complete?

Interested in gestalt principles? I’ve blogged about the use of gestalt principles before.

Design better online-video chatting

Last year I worked with a team most of whose members were on a different continent. Since my job as a usability analyst and interaction designer often requires me to influence, motivate, and give feedback about work already completed, I quickly adopted online video chat in order to see the non-verbal communication cues of my teammates.

In the course of my work, I spent many hours chatting online with team members in Australia, India, and Canada. I experimented with camera locations and different video software. I also read about the research of David Nguyen and John Canny in More than Face-to-Face: Empathy Effects of Video Framing. The researchers explain how the right use of cameras makes an online experience as good as a face-to-face experience. And I combined this with research presented by Byron Reeves and Clifford Nass in The Media Equation: How People Treat Computers, Television and New Media Like Real People and Places. The authors explain how, in many ways, the human brain cannot distinguish between an online experience and a live, in-person experience.

I realised that it’s not just about how I communicate with my team members. As an interaction designer, I can improve the user experience of online video chat and online video calls—for example, in live Support calls—by considering:

• What is needed to give the illusion of eye contact?

Since people aren’t in the same space, eye contact isn’t real, but eye contact can be simulated, with all the benefits that ensue from actual eye contact.

• How do we minimise the false non-verbal cues that online experiences can introduce?

Poor camera position creates cues that aren’t really there, but the viewer’s brain still processes and reacts to them. False cues can convey boredom, submissiveness, disrespect, and so on.

• What exactly should the video include in its frame?

To get results that are equivalent to a face-to-face meeting, what’s in the frame is critical. For live online video calls, the common heads-only frame is undesirable.

Since a lot of the above information is best conveyed visually, here’s a movie to explain it:

Ethics of interaction design: influencing user choices

The more choices people have, the more likely they’ll choose something utilitarian over something hedonistic.

In an experiment by Aner Sela, Jonah Berger, and Wendy Liu, 20% of 121 participants chose low-fat ice cream when given a simple choice of two, but 37% chose low-fat ice cream when given a choice of ten. In this case, low-fat is seen as more utilitarian.
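As a back-of-the-envelope check, a difference like 20% versus 37% can be examined with a two-proportion z-test. The sketch below assumes the 121 participants were split roughly evenly between the two conditions, which the summary above doesn’t specify, so treat the numbers as illustrative:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    # Pooled two-proportion z-test: how many standard errors
    # apart are the two observed rates?
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Assumed split (not reported above): ~60/61 participants per condition.
z = two_proportion_z(0.20, 60, 0.37, 61)
print(round(z, 2))  # about 2.07, beyond the 1.96 cutoff for p < 0.05
```

Under that assumed split, the shift toward the utilitarian choice would be statistically significant at the conventional 5% level; the researchers’ own analysis remains the authoritative source.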

You’re probably not in ice-cream retail, so you may be interested to know that this finding also holds for hardware choices. When choosing one item from a selection of printers and MP3 players, the number of choices also influences what participants will choose. Given a simpler selection, two printers and two MP3 players, participants chose the MP3 player by about 3:1. However, just as an increase in ice-cream choices resulted in more utilitarian choices, so did an increase in the number of printers or MP3 players. When either the number of printers or the number of MP3 players increased to six (plus two of the other), the ratio of MP3 players to printers chosen dropped to 1:1. And, yes, in this experiment, the participants regarded printers as more utilitarian, and MP3 players as more hedonistic or fun.

But it’s never that simple, because human brains can easily be manipulated.

The same researchers, in a further study, confirmed that people who earlier made a virtuous or selfless choice can more easily justify a subsequent hedonistic choice.

If you ask visitors to an e-commerce web site to choose which charity should receive a portion of the site’s profits, the act of choosing between charity A and charity B probably increases the likelihood of a hedonistic subsequent choice.

You can combine all of this with other research findings. For example, when given a list of choices with the prices in descending order (the most expensive item listed first), people are willing to consider spending 19% more, according to Cai Shun and Yunjie (Calvin) Xu.

Imagine the power of persuasion, or the influence, that an informed interaction designer can have on users, online customers, voters, and so on.

Clearly, there are ethical considerations. And the industry is starting to recognise this. For the first time this year, at the UPA 2009 conference in Portland, I saw conference presenters discussing ethics in interaction design. I’m sure the discussion is only beginning.

Cognitive psych in poll design

The WordPress community recently ran a poll. Users were asked to choose one of 11 visual designs. The leading design got only 18% of the vote, which gives rise to such questions as:

• Is this a meaningful win? The leader only barely beat the next three designs, and 82% voted for other designs.

I don’t know about the 18% versus 82%. I do wonder whether some of the entries triggered a cognitive process in voters that caused them to pay less attention to the other designs, which may bring the leading design’s razor-thin lead into question. This cognitive process—known as the “ugly option”—is used successfully by designers as they deliberately apply cognitive psychology to entice users to act. I’ll explain why below, but first I want to explain my motivation for this blog post.

I’m using this WordPress poll as a jumping-off point to discuss the difficulty of survey design. I’m not commenting on the merit of the designs. (I never saw the designs up close.) And I’m certainly not claiming that people involved in the poll used cognitive psych to affect the poll’s outcome. Instead, in this blog, I’m discussing what I know about cognitive psychology as it applies to the design of surveys such as this recent WordPress.org poll.

Survey design affects user responses

If you’ve heard of the controversial Florida butterfly ballot in the USA’s presidential election in 2000, then you know ballot design—survey design—can affect the outcome. I live outside the USA, but as a certified usability analyst I regularly come across this topic in industry publications; since that infamous election, usability analysts in the USA have been promoting more research and usability testing to ensure good ballot design. I imagine that the Florida butterfly ballot would have tested poorly in a formative usability study.

The recent WordPress poll, however, would likely have tested well in a usability study to determine whether WordPress users could successfully vote for their choice. The question I have is whether the entries themselves caused a cognitive bias in favour of some entries at the expense of others.

It seems that one design was entered multiple times, as dark, medium, and light variations. This seems like a good idea: “Let’s ask voters which one is better.” Interestingly, the visual repetition—the similar images—may have an unintended effect if you add other designs into the mix. Cognitive science tells us people are more likely to select one of the similar ones. Consider this illustration:

More people choose the leftmost image. The brain’s tendency to look for patterns keeps it more interested in the two similar images. The brain’s tendency to avoid the “ugly option” means it’ll prefer the more beautiful one of the two. Research shows that symmetry correlates with beauty across cultures, so I manipulated the centre image in Photoshop to make it asymmetrical, or “uglier”.

The ugly-option rule applies to a choice between different bundles of goods (like magazine subscriptions with different perks), different prices (like the bottles on a restaurant wine list), and different appearances (like the photos, above). It may have applied to the design images in the WordPress poll. The poll results published by WordPress.org list the intentional variations in the table of results:

• DR1: Fluency style, dark
• DR2: Fluency style, medium
• DR3: Fluency style, light

In addition to these three, which placed 1st, 4th, and 6th overall, it’s possible there were other sets of variations, because other entries may have resembled each other, too.

As a usability analyst and user researcher, I find this fascinating. Does the ugly-option rule hold true when there are 11 options? Was the dark-medium-light variation sufficient to qualify one of the three as ugly? Did the leading design win because it was part of a set that included an ugly option? And, among the 11 entries, how many sets were there?

There are ways to test this.

Test whether the poll results differ in the absence of an ugly-option set. A/B testing is useful for this. It involves giving half the users poll A with only one of the dark-medium-light variants, and the other half poll B with all three variants included. You can then compare the two result sets. If there is a significant difference, then further combinations can be tested to see whether other possible explanations can be ruled out.
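To make that comparison concrete, here is a minimal sketch of how the two result sets could be compared with a Pearson chi-square test. The vote counts are entirely hypothetical, invented for illustration:

```python
def chi_square_2xk(poll_a, poll_b):
    """Pearson chi-square statistic for a 2 x k contingency table:
    two polls' vote counts over the same k design options."""
    total_a, total_b = sum(poll_a), sum(poll_b)
    grand = total_a + total_b
    stat = 0.0
    for i in range(len(poll_a)):
        col = poll_a[i] + poll_b[i]  # total votes for option i
        for observed, row_total in ((poll_a[i], total_a), (poll_b[i], total_b)):
            expected = row_total * col / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts for three designs, invented for illustration:
# poll A shows only one Fluency variant; poll B shows all three
# (Fluency votes summed into one column for comparability).
votes_a = [90, 70, 60]
votes_b = [70, 60, 95]
chi2 = chi_square_2xk(votes_a, votes_b)
print(round(chi2, 2))
```

With made-up counts like these (a statistic around 11.1 on 2 degrees of freedom, above the 5.99 cutoff for p < 0.05), you would conclude that including the full variant set changed the outcome. In practice, a standard library routine for chi-square contingency tests would also report the p-value directly.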

For more about the ugly option and other ways to make your designs persuasive, I recommend watching Kath Straub and Spencer Gerrol in the HFI webcast, The Science of Persuasive Design: Convincing is converting, with video and slides. There’s also an audio-only podcast and an accompanying white paper.