When a user interface is for using—not for understanding—a product

The purpose of a user interface is not to explain how a product works. Instead, its purpose is to help people use the product. Here’s an idea: if someone can use your product without understanding how it works, that’s probably just fine.

What model does the user interface reflect?

Models help people make sense of ideas and things.

  • An implementation model is how engineers and software developers think of the thing they’re building. It helps them to understand the product’s inner workings, the sum of its software algorithms and physical components. For example, a car mechanic has an implementation model of combustion engines.
  • A mental model is how someone believes a product behaves when they interact with it. It helps them to understand how to use the product. For example, a typical car driver has a mental model of pressing the accelerator pedal to go faster and pressing the brake to slow down. This mental model doesn’t reflect how the car is built: there are many parts between the gas pedal and the car’s spinning tires that typical drivers don’t know about. (The sketch just after this list shows the contrast in code.)
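
If you think in code, the split between the two models may be easiest to see as an interface hiding an implementation. Here’s a minimal, hypothetical sketch in TypeScript (the Car class and everything inside it are invented for illustration, not how any real car works): the public accelerate() method is all the mental model needs, and the private machinery behind it belongs to the implementation model.

    // Hypothetical sketch: the public surface matches the driver's mental
    // model; the private internals belong to the implementation model.
    class Car {
      private throttlePosition = 0; // internals a typical driver never thinks about

      // Mental model: "press the pedal, go faster."
      accelerate(pedalPressure: number): void {
        this.throttlePosition = Math.min(1, Math.max(0, pedalPressure));
        this.injectFuel(this.throttlePosition);
      }

      // Implementation model: fuel injection, combustion, drivetrain, tires.
      private injectFuel(throttle: number): void {
        // ...the many parts between the pedal and the spinning tires...
      }
    }

    new Car().accelerate(0.5); // the driver's whole interaction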

The implementation model and the mental model can be very similar. For example, the mental model of using a wood saw is that “The saw makes a cut when I drag it back and forth across the wood.” This overlaps with the implementation model. In addition to the back-and-forth user action, the implementation model also includes an understanding of how the saw’s two rows of cutting edges—one for the forward stroke and one for the backward stroke—cut the wood fibers, break the cut fibers loose, and then remove them from the kerf. It also includes knowing whether the saw’s tooth shape is better for cutting fresh wood or dried wood.

The mental and implementation models can overlap, or not

The implementation model and the mental model can also be very different. Let’s consider another example: getting off a public-transit bus. The mental model of opening the exit doors is that “When the bus stops, I give the doors a nudge and then the doors open fully.” The implementation model of the exit doors is that, once the bus stops and the driver enables the mechanism, the exit doors open when a passenger triggers a sensor. Now consider this: if the sensor is a touch sensor, then the passenger’s mental model of “nudging the door” is correct. But if the sensor is a photoelectric sensor—a beam of light—then the passenger’s mental model of “nudging the door” is incorrect.
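
To make the mismatch concrete, here’s a minimal sketch of that implementation model as code. It’s hypothetical TypeScript, not real transit firmware, and every name in it is invented: the doors open only when all three conditions hold, and which passenger action trips the sensor depends on the kind of sensor installed.

    // Hypothetical sketch of the exit-door logic described above.
    type SensorKind = "touch" | "photoelectric";
    type PassengerAction = "nudge-door" | "break-beam";

    interface ExitDoor {
      busStopped: boolean;     // the bus has come to a stop
      driverEnabled: boolean;  // the driver has enabled the mechanism
      sensor: SensorKind;
    }

    // Which passenger actions actually trip each kind of sensor.
    function sensorFires(door: ExitDoor, action: PassengerAction): boolean {
      return door.sensor === "touch"
        ? action === "nudge-door"   // a touch sensor responds to contact
        : action === "break-beam";  // a photoelectric sensor only notices its beam
    }

    // The doors open only when every implementation-model condition holds.
    // The passenger's mental model covers just the last one.
    function doorsOpen(door: ExitDoor, action: PassengerAction): boolean {
      return door.busStopped && door.driverEnabled && sensorFires(door, action);
    }

On a photoelectric door, doorsOpen(door, "nudge-door") stays false no matter how firmly the passenger pushes. That mismatch is exactly the design challenge the signs below try to solve.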

To exit, break the photoelectric beam

Getting bus passengers to break the photoelectric beam was a real-life design challenge that was solved in different ways. In Calgary, public-transit buses use a large, complex sign on exit doors to present a mental model that’s somewhat consistent with the implementation model:

[Photo: Signage explains the complex implementation model]

TO OPEN THE DOOR

      1. WAIT FOR GREEN LIGHT
      2. WAVE HAND NEAR DOOR HERE

In Vancouver, public-transit buses use a large, simple sign on exit doors to present a mental model that’s inconsistent with the implementation model:

[Photo: Signage for a simpler mental model]

TOUCH HERE ▓ TO OPEN

In fact, touch does not open the exit doors at all—not on the Vancouver buses or the Calgary buses I observed. Only when a passenger breaks the photoelectric beam will the doors open. In Calgary, passengers are told to wave a hand near the door. A Calgary bus passenger might conclude that the exit door has a motion sensor (partly true) or a proximity sensor (not true). In Vancouver, passengers are told to touch a target, and the touch target is positioned so the passenger will break the photoelectric sensor beam when reaching for the target. A Vancouver bus passenger might conclude that the exit door has a touch sensor (not true).

Calgary bus passengers are more likely to guess correctly how the exit door actually works because the sign presents a mental model that partly overlaps the implementation model: the door detects hand-waving. But does that make it easier for someone without prior experience to exit the bus?

No, it’s harder.

It’s more difficult for a sign to get passengers to hold a hand up in the air in front of the door than to get them to put a hand on the door. Here’s why: if you knew nothing about a door that you wanted to open outward, would you place a hand on the door and push? Or would you wave at it? From our lifelong experience with doors, we know to push them open. Touching a door is more intuitive than waving at it, and that’s why “nudge the door” is a better mental model and thus an easier behaviour to elicit and train. The simpler mental model improves usability.

Rule of thumb for mental models

When an understanding of a product’s inner workings is unnecessary, staying true to the implementation model risks increasing the complexity of the user interface. Instead, have the user interface reflect a mental model that is simple, effective, and usable.

If you can relate the use of an object to a common experience or simple idea then do so—even if it doesn’t follow the implementation model. It is unnecessary for a system’s user interface to convey how the product was built. The user interface only needs to help users to succeed at their tasks.

No doubt there are cases where a lack of understanding of a product’s inner workings could endanger life and limb, or cause unintended destruction of property. In such cases, the mental model needs to convey the danger or risk or, failing that, needs to overlap more with the implementation model.

3 Replies to “When a user interface is for using—not for understanding—a product”

  1. Thanks for this, Jerome. I wish I’d had a copy of this years ago. It will be so useful to show why you don’t have to include everything in the interface (or the instructions). Especially since, as you state in your opening, it’s about the user’s behaviour. I think that our engineering friends believe that we’re lying to users when we don’t explain all the nitty-gritty details. This article shows them why that isn’t always necessary and, worse, has the potential to create confusion.

  2. Excellent point, though I think that the “wave in front of a beam” is more common now in bathrooms – for example, to get paper towels to dispense. And I appreciate that because I really don’t want to touch the door of a bus. (Do you know how many germs are on that thing?) But the location of the wave needs to be indicated, as you’ve pointed out, so that you realize where you need to position your hand. But then again, I’m not your average audience member. I work in technology, but in the real world I’m not paying attention half the time, so I get to the door and then randomly try things until it opens. By that time, I’ve forgotten which thing I did that actually worked. So rinse, lather, repeat. Or I could end up like the elderly woman in the bathroom who thought that you had to “clap on” to get the water running. (Again, the beam problem.)

    Overall, though, I think you have a good point about just getting it to work. Like swiping a credit card – you don’t know all the steps it goes through to authorize the transaction. You just need to know that the black bar or chip area has to make contact with the machine in a certain way for it to work.

  3. There are a number of really interesting points here. I enjoyed this post immensely…

    In terms of troubleshooting problems, it would be interesting to elaborate on the implementation model and mental model with the idea that there are different knowledge types (which, incidentally, all get subsumed under the heading “mental model” or “mental representation” in cognitive circles). As someone who has ridden Vancouver buses and has been trapped when the photoelectric sensor “didn’t work”, I believe that having knowledge of the implementation model and having it integrated with the mental model can help solve problems at these critical times (i.e., so people like me don’t have to frantically wave and panic when the doors don’t open).

    Knowledge types:
    declarative knowledge – tells us what a thing IS, what its inner workings are, how it’s composed, etc.
    procedural knowledge – tells us what to DO in terms of a sequence of events
    conditional knowledge – if/then statements, the type of knowledge we have that tells us what to do when certain conditions, features or situations arise

    Let’s use the car example first.
    press gas = go faster
    press brake = slow down
    This is procedural knowledge at its simplest. However, if it’s a snowy day in Vancouver, there is an intervening conditional-knowledge variable that makes press gas = go faster not work as well. It’s more like press gas = spin tires = go nowhere, which often results in pressing the gas harder, spinning the tires harder, and swearing. :) Without conditional knowledge for slippery snow on the ground, the driver resorts to the mental model/heuristic that they are used to using (press the gas harder – I’m sure we’ll go somewhere soon! It’s always worked before!). By throwing in some “implementation model” information or declarative knowledge about rear-wheel drive, tire traction, and generally the way cars behave in a variety of conditions, the user can use that knowledge to create a different strategy and break an unproductive loop (i.e., pressing the gas and spinning the tires hopelessly).
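
    To put that loop in code form – a toy sketch with made-up names, not anyone’s actual cognitive model – the three knowledge types might look like this:

      // Toy sketch (invented names): procedural knowledge alone loops,
      // while declarative knowledge suggests a new strategy.
      type Surface = "dry" | "snow";

      // Conditional knowledge: the same action has different outcomes.
      function pressGas(surface: Surface): "go faster" | "spin tires" {
        return surface === "dry" ? "go faster" : "spin tires";
      }

      function driverStrategy(surface: Surface, knowsAboutTraction: boolean): string {
        // Procedural knowledge suffices when the usual outcome holds.
        if (pressGas(surface) === "go faster") return "press gas";
        // Without declarative knowledge, repeat the old heuristic.
        if (!knowsAboutTraction) return "press gas harder (and swear)";
        // With some implementation-model (declarative) knowledge, break the loop.
        return "ease off and start gently";
      }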

    The bus sign example is a nifty one because the instructions on Vancouver buses (TOUCH HERE) give riders simple, one-step procedural knowledge without conditional knowledge OR declarative (implementation model) knowledge. But there is a risk, especially for people like me who wave their hands too fast or slightly the wrong way, and get trapped on the bus. That risk is that by not having SOME declarative knowledge of the way that photoelectric sensors work to open the doors, the user of the bus door (like the user of the car in snow) has no idea what to do when they TOUCH HERE and the doors don’t open. With some knowledge of the implementation model, the bus rider can say to herself something along the lines of, “Oh, this is a light sensor – maybe the light didn’t sense my hand. I will calm down and simply touch the TOUCH HERE spot again – the light will probably sense my hand THIS time.”

    I would say that having at least SOME knowledge of how things work (and the level of granularity on that front is a really cool problem) can help the user solve problems, because it ties user activity to the nature of the object that they are trying to use or operate, especially in the context of intervening or unique variables like snow and poor hand position. :)

Comments are closed.