Chip-card usability: Remove the card to fail

Card reader

I went to a shop near my home, made a purchase, and tried to pay by using a chip card. Tapping the card on the machine didn’t work, so I had to insert the card and enter a code.

My first attempt failed because I pulled my card out of the card reader too soon, before the transaction was finished. I should add that I removed my card when the machine apparently told me to do so.

The machine said: “REMOVE CARD”

And just as I pulled my card out, I noticed the other words: “PLEASE DO NOT”

Have you done this, too…?


Natural mapping of light switches

I recently moved into a home where the light switches are all wrong. I was able to fix one problem, and the rest is a daily reminder that usability doesn’t just happen—it takes planning.

Poorly mapped light switches.
The switch on the left operates a lamp on the right, and vice versa. This is not an example of natural mapping.

On one wall, a pair of light switches was poorly mapped. The left switch operated a lamp to the right, and the right switch operated a lamp to the left. The previous resident’s solution to this confusing mapping was to put a red dot on one of the switches, presumably as a reminder. I put up with that for about three days.

Train yourself in frustration, confusion, and inefficiency

For professional reasons, I like to mess around with software. It’s a form of training, because some of the messing around leads to frustration, confusion, and inefficiency. And that’s good.

My hope is that my experiences will help me to better understand what I put various groups of software users through when they use the software I helped design and build.

An easy way to mess around is to change default settings. For example, my iTunes isn’t set to English. This helps me understand the experience of users who learned one language at home as children and now use another language at work as adults. It’s beneficial not just to experience the initial pain of memorising where to click (as I become a rote user in a GUI I cannot read), but also the additional moments of frustration when I must do something new: an occasional task whose command vector I haven’t memorised.

Relating to the language challenges that some users face

Another easy way to mess around is to switch between iMac and Windows computers. It’s not just the little differences, such as whether the Minimise/Maximise/Close buttons are on the left or right sides of the title bar, or whether that big key on the keyboard is labelled Enter or Return.

Switching between operating systems

It’s also the experience of inefficiency. It’s knowing you could work faster, if only the tool weren’t in your way. This also applies to successive versions of “the same” operating system. This is the frustration of the transfer user.

It’s noticing how arbitrary many design standards are, and how arbitrarily they differ between operating systems: for example, the End key either does or doesn’t move the insertion point to the end of the line.

Another easy way to mess around is to run applications in a browser that’s not supported. I do it for tasks that matter, such as making my travel bookings.

All this occasional messing around is about training myself. The experiences I get from this broaden the range of details I ask developers to think about as they convert designs into code and into pleasing, productive user experiences.

In a separate IxDA discussion thread, a few people reacted to this blog post:

  • Try a Dvorak keyboard instead of a Qwerty keyboard (Johnathan Berger).
  • Watch children’s first use of a design (Brandon E.B. Ward).
  • Use only the keyboard, not the mouse (CK Vijay Bhaskar).
  • Sit in at the Customer Support desk for a day (Adrian Howard).
  • Search Twitter to find out how people feel about a product (Paul Bryan).

See also the comment(s) below, directly in this blog.

Unreliability of self-reported user data

Many people are bad at estimating how often and how long they’re on the phone. Interestingly, you can predict who will overestimate and who will underestimate their phone usage, according to the 2009 study, “Factors influencing self-report of mobile phone use” by Dr Lada Timotijevic et al. For this study, a self-reported estimate is considered accurate if it is within 10% of the actual number:

Defining 'accuracy'

                            Underestimated   Accurate   Overestimated
Number of phone calls
  High user                      71%            10%          19%
  Medium user                    53%            21%          26%
  Low user                       33%            16%          51%
Duration of phone calls
  High user                      41%            20%          39%
  Medium user                    27%            17%          56%
  Low user                       13%             6%          81%
(Each cell shows the percentage of people in that user group.)
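
As a minimal sketch of that 10% rule (my own illustration, not code from the study), a self-reported figure could be classified like this:

    def classify_estimate(reported, actual, tolerance=0.10):
        """Classify a self-reported figure against the measured figure.
        An estimate counts as accurate when it falls within 10% of the
        actual value, mirroring the threshold quoted above."""
        if reported < actual * (1 - tolerance):
            return "underestimated"
        if reported > actual * (1 + tolerance):
            return "overestimated"
        return "accurate"

    # Example: a participant reports 30 calls a week, but the logs show 45.
    print(classify_estimate(reported=30, actual=45))  # underestimated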

If people are bad at estimating their phone use, does this mean that people are bad at all self-reporting tasks?

Not surprisingly, it depends on how long it’s been since the event they’re trying to remember. It also depends on other factors. Here are some findings that should convince you to be careful with the self-reported user data you collect.

What’s the problem with self-reported data?

On questions that ask respondents to remember and count specific events, people frequently have trouble because their ability to recall is limited. Instead of answering “I’m not sure,” people typically use partial information from memory to construct or infer a number. In 1987, N.M. Bradburn et al found that U.S. respondents to various surveys had trouble answering such questions as:

  • During the last 2 weeks, on days when you drank liquor, about how many drinks did you have?
  • During the past 12 months, how many visits did you make to a dentist?
  • When did you last work at a full-time job?

To complicate matters, not all self-report data is suspect. Can you predict which data is likely to be accurate or inaccurate?

  • Self-reported Madagascar crayfish harvesting—quantities, effort, and harvesting locations—collected in interviews was shown reliable (2008, Julia P. G. Jones et al).
  • Self-reported eating behaviour by people with binge-eating disorders was shown “acceptably” reliable, especially for bulimic episodes (2001, Carlos M. Grilo et al).
  • Self-reported condom use was shown accurate over the medium term, but not in the short term or long term (1995, James Jaccard et al).
  • A year later, self-reported numbers of sex partners were underreported, while sexual experiences and condom use were overreported, compared with the self-reported data collected at the time (2002, Maryanne Garry et al).
  • Self-reported questions about family background, such as father’s employment, result in “seriously biased” research findings in studies of social mobility in The Netherlands—by as much as 41% (2008, Jannes Vries and Paul M. Graaf).
  • Participation in a weekly worship service is overreported in U.S. polls. Polls say 40% but attendance data says 22% (2005, C. Kirk Hadaway and Penny Long Marler).

Can you improve self-reported data that you collect?

Yes, you can. Consider these:

  • Decomposition into categories. Estimates of credit-card spending get more accurate if respondents are asked for separate estimates of their expenditures on, say, entertainment, clothing, travel, and so on (2002, J. Srivastava and P. Raghubir).
  • For your quantitative or qualitative usability research or other user research, it’s easy to write your survey questions or your lines of inquiry so they ask for data in a decomposed form.

  • Real-time data collection. Collecting self-reported real-time data from patients in their natural environments “holds considerable promise” for reducing bias (2002, Michael R. Hufford and Saul Shiffman).
    Collecting real-time self-report data
  • This finding is from 2002. Social-media tools and handheld devices now make real-time data collection more affordable and less unnatural. For example, use text messages or Twitter to send reminders and receive immediate direct/private responses.

  • Fuzzy set collection methods. Fuzzy-set representations provide a more complete and detailed description of what participants recall about past drug use (2003, Georg E. Matt et al).
  • If you’re afraid of math but want to get into fuzzy sets, try a textbook (for example, Fuzzy set social science by Charles Ragin), audit a fuzzy-math course for social sciences (auditing is a low-stakes way to get things explained), or hire a tutor in math or sociology/anthropology to teach it to you.
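
If the textbook route sounds heavy, the core idea is small. Here is a toy sketch (my own, not from Matt et al): instead of forcing a vague recollection into one number, record a degree of membership for each candidate answer, and collapse it to a single number only if you must.

    # A respondent recalls doing something "about two or three times a week".
    # A crisp survey forces one number; a fuzzy representation keeps the vagueness.
    recalled_frequency = {   # times per week -> degree of membership (0 to 1)
        1: 0.2,
        2: 0.9,
        3: 1.0,
        4: 0.5,
        5: 0.1,
    }

    # A simple crisp summary (centroid), if a single number is needed later:
    centroid = (sum(k * v for k, v in recalled_frequency.items())
                / sum(recalled_frequency.values()))
    print(round(centroid, 1))  # about 2.8 times per week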

Also, when there’s a lot at stake, use multiple data sources to examine the extent of self-report response bias, and to determine whether it varies as a function of respondent characteristics or assessment timing (2003, Frances K. Del Boca and Jack Darkes). Remember that your qualitative research is also one of those data sources.

Up and down the TV channels

My television lets me step through the channels. To do this, I use the remote control’s CH button. Similarly, my television lets me page through the list of programs, five channels at a time. To do this, I use the remote control’s PG button. In fact, it’s one button for the stepping and paging functions.

My remote control

The programs in the list are shown in numeric order, so smaller numbers are higher in the list. Pressing “+” will page the list up, so “+” leads to smaller numbers. Similarly, pressing “–” will page the list down, to larger numbers. This follows the same mental model as scrolling in a computer window, including the window you’re reading this in now.

Scrolling up

In contrast, when I’m watching one channel (full-screen, so with the program guide hidden), the same two buttons have the inverse effect. The “+” button increases the number of the channel (which is like moving down in the programs list, not up). This follows the same mental model as a spin control in many computer programs.

Spinning up

Imagine using the one button in succession for the two functions:

first as PG to page through the menu
  and then, after selecting a channel,
as CH to step through the channels.
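
To see the inconsistency side by side, here is a small sketch (my own model of the behaviour, not the remote’s firmware) of what the same “+” press does in the two modes:

    CHANNELS = list(range(1, 51))   # channel numbers, smallest first
    PAGE_SIZE = 5                   # the program list shows 5 channels at a time

    def press_plus_in_guide(top_of_page):
        """PG mode: '+' pages the list up, towards smaller channel numbers."""
        return max(top_of_page - PAGE_SIZE, CHANNELS[0])

    def press_plus_while_watching(current_channel):
        """CH mode: '+' steps to the next channel, a larger number."""
        return min(current_channel + 1, CHANNELS[-1])

    # The same physical button moves in opposite directions through the numbers:
    print(press_plus_in_guide(top_of_page=20))             # 15 (towards smaller numbers)
    print(press_plus_while_watching(current_channel=20))   # 21 (towards larger numbers)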

I see in this an excellent problem for a practicum student, or a class assignment that combines user research, design, GUI, and handheld devices. Possible questions:

  • What research would confirm that this is, in fact, a problem?
  • If you confirm the problem, is it entirely on the hardware side? How many people are affected?
  • Is there a business case to fix the problem?
  • How could you fix it? What design methods and processes would you use? Why?
  • How could you demonstrate that your design fixes the problem? Is there a lower-cost way to validate the design, and, if so, what are the trade-offs?

Ethics of interaction design: influencing user choices

The more choices people have, the more likely they’ll choose something utilitarian over something hedonistic.

Ice-cream cones

In an experiment by Aner Sela, Jonah Berger, and Wendy Liu, 20% of 121 participants chose low-fat ice cream when given a simple choice of two, but 37% chose low-fat ice cream when given a choice of ten. In this case, low-fat is seen as more utilitarian.

You’re probably not in ice-cream retail, so you may be interested to know that this finding also holds for hardware choices. When choosing one item from a selection of printers and MP3 players, the number of choices also influences what participants choose. Given a simple selection of two printers and two MP3 players, participants chose an MP3 player over a printer by about 3:1. However, just as an increase in ice-cream choices resulted in more utilitarian choices, so did an increase in the number of printers or MP3 players. When either the number of printers or the number of MP3 players increased to six (plus two of the other), the ratio of printers to MP3 players chosen shifted to about 1:1. And, yes, in this experiment the participants regarded printers as more utilitarian, and MP3 players as more hedonistic or fun.

Printers and MP3 players

But it’s never that simple, because human brains can easily be manipulated.

The same researchers, in a further study, confirmed that people who earlier made a virtuous or selfless choice can more easily justify a subsequent hedonistic choice.

Offering users a virtuous choice

If you ask visitors to an e-commerce web site to choose which charity should receive a portion of the site’s profits, the act of choosing between charity A and charity B probably increases the likelihood of a hedonistic subsequent choice.

You can combine all of this with other research findings. For example, when given a list of choices with the prices in descending order (the most expensive item listed first), people are willing to consider spending 19% more, according to Cai Shun and Yunjie (Calvin) Xu.

Imagine the power of persuasion, or the influence, that an informed interaction designer can have on users, online customers, voters, and so on.

Clearly, there are ethical considerations. And the industry is starting to recognise this. For the first time this year, at the UPA 2009 conference in Portland, I saw conference presenters discussing ethics in interaction design. I’m sure the discussion is only beginning.

Month and year: date enough?

Once again, I’ve learned that I am typical. I tend to format dates the same way as many other people do.  This excerpt from William Hudson’s date study report shows the most common formats:

Click to view the source and much more detail

The report has all sorts of tidbits for interaction designers about date formats, error trapping, leading zeros, and more.

One thing the study doesn’t discuss is when not to use dates.

I recently recommended dropping the request for the customer’s complete birthday from an online customer-registration form. The site will use the date to compare the customer’s data with that of a sample of people the same age, and will tell the user whether they’re below or above average.

Month and year

The point behind my recommendation is this: you can accomplish that with reasonable accuracy with only the month and year, or perhaps with only the year.
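
As a rough sketch of the arithmetic (my own illustration, not the site’s code): with only a birth month and year, the computed age can be wrong only during the person’s birth month itself, which is close enough for slotting someone into an age group.

    from datetime import date

    def approximate_age(birth_year, birth_month, today=None):
        """Estimate age from birth month and year; the unknown day of the
        month matters only when today falls inside the birth month."""
        today = today or date.today()
        months = (today.year - birth_year) * 12 + (today.month - birth_month)
        return months // 12

    # Example: born June 1980, checked in March 2010 -> about 29 years old.
    print(approximate_age(1980, 6, today=date(2010, 3, 15)))  # 29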

Unless your business or the application really needs to know a user’s exact date of birth, collecting that information online needlessly exposes your company by making its data a juicier target for hackers and identity thieves.

This sugar packet is a movie

Whether it’s ethnographic research, usability research, or marketing research, I’ve learned that the best insights aren’t always gleaned from scheduled research.

Here’s a photo of impromptu research, conducted by Betsy Weber, TechSmith’s product evangelist. I was her research subject. Betsy recorded me pushing sugar packets around a table as I explained how I’d like Camtasia to behave.

Jerome demos an idea to Betsy. Photo by Mastermaq

Betsy takes information like this from the field back to the Camtasia team. There’s no guarantee that my idea will influence future product development, but what this photo shows is that TechSmith listens to its users and customers.

The ongoing stream of research and information that Betsy provides ensures better design of products that will be relevant and satisfying for TechSmith customers down the line.