Cognitive psych in poll design

The WordPress community recently ran a poll. Users were asked to choose one of 11 visual designs. The leading design got only 18% of the vote, which gives rise to such questions as:

  • Is this a meaningful win? The leader only barely beat the next three designs, and 82% voted for other designs.
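
Whether that 18% is a meaningful win depends on the sample size, which isn't given here. As a rough sketch, here is how you could check whether the leader's edge over the runner-up clears statistical noise, using a normal approximation for two shares from the same poll. The vote counts below (1,000 voters, 18% vs. 16%) are entirely hypothetical:

```python
import math

def lead_z_score(p1, p2, n):
    """z-score for the difference between two share estimates drawn
    from the same multinomial poll (normal approximation)."""
    var = (p1 + p2 - (p1 - p2) ** 2) / n
    return (p1 - p2) / math.sqrt(var)

# Hypothetical numbers: the real vote counts were not published here.
z = lead_z_score(0.18, 0.16, 1000)  # leader 18%, runner-up 16%, 1000 voters
print(f"z = {z:.2f}")  # z = 1.09
```

With those made-up numbers, z comes out around 1.09, well short of the 1.96 needed for significance at the 95% level, so a two-point lead in a 1,000-voter poll would be statistically indistinguishable from a tie.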

I don’t know about the 18% versus 82%. I do wonder whether some of the entries triggered a cognitive process in voters that caused them to pay less attention to the other designs, which would call the leading design’s razor-thin lead into question. This cognitive process, which produces what’s known as the “ugly option,” is used successfully by designers who deliberately apply cognitive psychology to entice users to act. I’ll explain why below, but first I want to explain my motivation for this blog post.

I’m using this WordPress poll as a jumping-off point to discuss the difficulty of survey design. I’m not commenting on the merit of the designs. (I never saw the designs up close.) And I’m certainly not claiming that people involved in the poll used cognitive psych to affect the poll’s outcome. Instead, in this blog, I’m discussing what I know about cognitive psychology as it applies to the design of surveys such as this recent poll.

Survey design affects user responses

If you’ve heard of the controversial Florida butterfly ballot in the USA’s presidential election in 2000, then you know ballot design—survey design—can affect the outcome. I live outside the USA, but as a certified usability analyst I regularly come across this topic in industry publications; since that infamous election, usability analysts in the USA have been promoting more research and usability testing to ensure good ballot design. I imagine that the Florida butterfly ballot would have tested poorly in a formative usability study.

The recent WordPress poll, however, would likely have tested well in a usability study to determine whether WordPress users could successfully vote for their choice. The question I have is whether the entries themselves caused a cognitive bias in favour of some entries at the expense of others.

It seems that one entry was entered multiple times, as dark, medium, and light variations. This seems like a good idea: “Let’s ask voters which one is better.” Interestingly, the visual repetition—the similar images—may have an unintended effect if you add other designs into the mix. Cognitive science tells us people are more likely to select one of the similar ones. Consider this illustration:

More people choose the leftmost image. The brain’s tendency to look for patterns keeps it more interested in the two similar images. The brain’s tendency to avoid the “ugly option” means it’ll prefer the more beautiful one of the two. Research shows that symmetry correlates with beauty across cultures, so I manipulated the centre image in Photoshop to make it asymmetrical, or “uglier”.

The ugly-option rule applies to a choice between different bundles of goods (like magazine subscriptions with different perks), different prices (like the bottles on a restaurant wine list), and different appearances (like the photos, above). It may have applied to the design images in the WordPress poll. The published poll results list the intentional variations in the table of results:

  • DR1: Fluency style, dark
  • DR2: Fluency style, medium
  • DR3: Fluency style, light

In addition to these three, which placed 1st, 4th, and 6th overall, it’s possible there were other sets of variations, because other entries may have resembled each other, too.

As a usability analyst and user researcher, I find this fascinating. Does the ugly-option rule hold true when there are 11 options? Was the dark-medium-light variation sufficient to qualify one of the three as ugly? Did the leading design win because it was part of a set that included an ugly option? And, among the 11 entries, how many sets were there?

There are ways to test this.

Test whether the poll results differ in the absence of an ugly-option set. A/B testing is useful for this: give half the users poll A, with only one of the dark-medium-light variants, and the other half poll B, with all three variants included. You can then compare the two result sets. If there is a significant difference, further combinations can be tested to rule out other possible explanations.
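Once both polls have run, the comparison itself is straightforward. Here is a minimal sketch of the significance check, using a Pearson chi-square test on a 2×2 table; the function name and the vote counts are hypothetical, not from the actual poll:

```python
def chi2_2x2(a_yes, a_no, b_yes, b_no):
    """Pearson chi-square for a 2x2 table: did the leading design's
    share of the vote differ between poll A and poll B?"""
    total = a_yes + a_no + b_yes + b_no
    observed = [a_yes, a_no, b_yes, b_no]
    row_a, row_b = a_yes + a_no, b_yes + b_no
    col_yes, col_no = a_yes + b_yes, a_no + b_no
    expected = [row_a * col_yes / total, row_a * col_no / total,
                row_b * col_yes / total, row_b * col_no / total]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts: poll A (one Fluency variant) vs poll B (all three).
# "yes" = votes for the leading design; "no" = votes for anything else.
chi2 = chi2_2x2(90, 410, 130, 370)
print(f"chi-square = {chi2:.2f}")  # chi-square = 9.32
```

With these made-up counts, the statistic exceeds the 3.84 critical value (df = 1, p < .05), which is the kind of result that would suggest the presence of the extra variants changed voting behaviour.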

For more about the ugly option and other ways to make your designs persuasive, I recommend watching Kath Straub and Spencer Gerrol in the HFI webcast, The Science of Persuasive Design: Convincing is converting, with video and slides. There’s also an audio-only podcast and an accompanying white paper.

Put the card in the slot

We know that human brains use patterns (or schemata) to figure out the world and decide what to do. This kind of cognitive activity takes place very quickly, which means we can react quickly to the world around us, as long as the pattern holds.

Here’s a pattern (or schema) that your brain may know: to put a card in a slot, use the narrow edge. If the card has a clipped corner, use that edge. Some examples:

This pattern is easy for your brain because of the physical cues, also known as affordance. The narrow edge plus the clipped corner say: “This side goes in.”

A pattern that’s harder for your brain to learn is the magnetic-stripe bank card. You have to think about which way the stripe goes. And it seems harder when you’re in a hurry, or when you feel less safe. Have you used an outdoor bank machine at night, with people hanging around?

Vancouver transit tickets are awful because they don’t follow the card-in-the-slot pattern correctly. As a passenger on the SkyTrain (overhead subway line), you must punch your ticket at the station entrance. The machines are placed so you must turn your back on the drug dealers and their customers who hang out in the subway. As with bank cards, these transit tickets fit four ways, but only one will punch the ticket, and it’s not the edge with the clipped corner. There’s a yellow arrow to assist, but while the arrow is clearly visible in daylight, it’s almost invisible in the yellow-tinged fluorescent lighting of most SkyTrain stations. Also, the yellow arrow must be face down, which is counterintuitive because then you cannot see it. In the photo (above, right), can you find the arrow on the ticket?

Did the designers consider how their ticket machines or bank cards would make people feel? Designing for emotion seems to have legs, these days, but it’s not a new idea.

Doing better

I wonder whether the designers of these systems (transit tickets, bank cards) considered all possible options. It’s a Five Sketches™ mantra: You get a better design when you first saturate the design space. This can include doing a competitor analysis to seek out other ideas. And there are other models for transit tickets. I’ve seen Paris subway tickets with the magnetic stripe in the middle, so passengers could insert the ticket face up or face down, frontwards or backwards. The more recent tag-on/tag-off technology used from London, UK, to Perth, WA, avoids the insert-your-card problem—though the overall experience may be worse, since failure to tag off means paying the highest possible fare.

As for bank cards, IBM’s designers must have modelled bank cards on credit cards, which had the magnetic stripe toward the top instead of in the middle. This doubles the customer’s chances of inserting the card incorrectly. An obvious question to have asked at the design stage: can we design a bank machine to read the card regardless of how it’s inserted?