The right mental model makes software easier to use

Recently, a client told me we needed to redesign the main data-entry form of their company’s flagship product. Customers said they didn’t like the form in our client’s SaaS (cloud-based) software. After extra training, customers still felt apprehensive and intimidated by its complexity.

The online form was built years ago, without a designer, as was typical of many start-ups. Within a few years, the iPhone and iPad showed people that software could be simpler, and that’s what our client’s customers wanted, too. So we were tasked with simplifying the form.

To simplify, we changed the mental model

And we succeeded. By changing the mental model—the way users believe the data-entry form to work—we managed to:

  • make data entry feel simple and easy.
  • reduce inaccurate data and increase data quality.
  • help users discover existing features that they had not been using.
  • chop 60% off the data-entry time.

These improvements in user performance and user perception are based on GOMS calculations, feedback from the client’s customers, and self-reports by participants in two usability studies. All these gains came from changing the mental model, and adjusting the user interface to clearly reflect the new mental model.
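For readers unfamiliar with GOMS, the calculation is essentially bookkeeping: a keystroke-level model (KLM) estimates task time by summing standard times for each operator (keystroke, pointing, mental preparation, and so on). The sketch below shows only that arithmetic, using the classic published operator times; the operator sequences are hypothetical and are not the ones from our study.

```typescript
// Minimal keystroke-level model (KLM) sketch. Task time is estimated as the
// sum of standard operator times (seconds, classic published KLM values).
// The operator sequences below are hypothetical, for illustration only.
const KLM_SECONDS: Record<string, number> = {
  K: 0.28, // press a key or button (average typist)
  P: 1.1,  // point to a target with the mouse
  H: 0.4,  // home hands between keyboard and mouse
  M: 1.35, // mentally prepare for the next step
};

function klmEstimate(operators: string[]): number {
  return operators.reduce((total, op) => total + (KLM_SECONDS[op] ?? 0), 0);
}

// Hypothetical sequences for entering one field and linking one record:
const oldWorkflow = ["M", "H", "P", "K", "H", "K", "K", "K", "M", "H", "P", "K", "P", "K"];
const newWorkflow = ["M", "K", "K", "K", "K"];

console.log(`old: ${klmEstimate(oldWorkflow).toFixed(2)} s`);
console.log(`new: ${klmEstimate(newWorkflow).toFixed(2)} s`);
```

Comparing such totals for the old and new workflows is how a claim like “chop 60% off the data-entry time” can be estimated before a single user is tested.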

What’s a mental model?

A mental model is a representation of how something works, which helps shape your approach to doing tasks. For example, you’re probably familiar with these mental models:

  • Fast-food restaurant. You arrive, order your food, pay, receive your food, sit down, then eat.
  • Fine-dining restaurant. You arrive, sit down, order your food, receive your food, eat, then pay.

The appearance of the restaurant helps you recognise what to do, and in what order.

Similarly, a mental model can help you complete your online task—especially if it closely resembles a real-world experience. For example:

  • Online shopping. You visit a store website, look at products, put some products in your shopping cart, choose delivery, and then pay. Later, you may track the delivery online, and receive the product in person.

To understand how the mental model affected users in this case study, let’s first look at how the product worked before, and then at how it works now.

The original mental model: Convoluted

We’re not actually sure what to call the old mental model, but we can describe the user’s task.

The user’s task is to enter the details of each communication, so they can be tracked and reported to an industry regulator. In the original workflow, you would do this:

  1. Click a button to start a new communication. In the main form, enter what was said, as well as the communication’s date, title, and whether it is confidential.
    Step 1: enter part of the information and save the communication
    You cannot enter all the information yet, before saving. For example, you cannot enter the names of people involved in the communication. That’s an obvious part of any communication—but it cannot be entered in the main form at all!
    Also, when saving, if you forgot any required information, you’ll get an error message.
    After you save, the form cannot close because there is more work to do.
  2. In a series of separate forms, select the names of people and topic tags involved in the communication.
    Step 2: link other records to the communication
    The information from each secondary form does not appear on the main form. Instead, the main form increments a counter. For example, if you link two people, the counter says 2 contact names.
    The workflow is even more complicated if the person involved in the communication is representing an organization, because then you need to select both, and link them together, with an additional pop-up form.
    Eventually, you’ll convince yourself that all the required contact names, staff names, topics, and other records are linked and counted on the main form.
  3. Save the main form again. After that, you can close the main form.
    Step 3: Save the communication a second time
    After closing the main form, we observed users doing an additional step. Most users navigate to the list of communications to confirm that the new communication is listed. This compulsion to check reflects the uncertainty that the convoluted workflow creates.

What was the original mental model called?

It was a struggle to describe the original mental model in familiar, real-world terms. The user must fill in a series of complex forms, juggling back and forth between them in order to transfer numbers to the main form. This might be the conceptual equivalent of preparing and filing income taxes—and just as unpleasant to do.
We had to change the mental model to one that was clearer and familiar.

Here’s a mock-up of the original form (left) and the new form (right):

The form, before and after

The new mental model: “Got gossip? Fill me in”

The redesigned interface combines two simple mental models that work well for the clerks who do most of the data entry. The form supports keyboard navigation, and can be pinned, so it is always open, ready to receive the next new communication.

  • Fill in a form. This is a mental model most people understand, and—provided the information they need to fill in is readily available—represents an easy task.
  • Gossip. During the data-entry stage, a typical communication record now resembles gossip, a mental model that’s readily understood, because humans tend to like stories: “Who said what about whom, how, where, when—and what’s our view of it?” Eliciting a story adds a bit of interest to a boring data-entry task.
    Of course, the gossip model is only suited for data entry. When reporting, the user interface groups, sorts, and presents the information as professional reports.

With the new mental model in mind, we carefully redesigned the data-entry form.

Now, users click Save only once. They fill in all the details at once; no more double saving. Behind the scenes, the same two-stage saving still occurs, but there’s no need to reflect this implementation in the workflow or user interface.

Save only once
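As a rough sketch of what hiding the two-stage save can look like in code—the type and function names here are assumptions for illustration, not the client’s actual API:

```typescript
// Hypothetical types and persistence calls, for illustration only.
interface CommunicationDraft {
  date: string;
  title: string;
  summary: string;
  confidential: boolean;
  participantIds: number[];
  topicIds: number[];
}

// Stage 1: save the main record and get its id (assumed API).
async function saveMainRecord(draft: CommunicationDraft): Promise<number> {
  // ...call the back end; return the new communication's id...
  return 0; // placeholder
}

// Stage 2: link people and topics to the saved record (assumed API).
async function linkRelatedRecords(communicationId: number, draft: CommunicationDraft): Promise<void> {
  // ...call the back end to create the link records...
}

// The single Save the user sees: both stages run behind one click,
// and the interface reports one outcome.
async function onSaveClicked(draft: CommunicationDraft): Promise<void> {
  const id = await saveMainRecord(draft);
  await linkRelatedRecords(id, draft);
  // e.g. show "Communication saved" and reset the pinned form
}
```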

Now, users select items from lists. No more linking records. Behind the scenes, the linking of records still occurs, but there’s no need to mention that these are database records.

Just select the items—don't link
To reduce the number of controls, some types of items are now combined in one list, such as the different groups, individuals, and contacts involved in a communication.
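Here’s a minimal sketch of how one combined list can still produce typed link records behind the scenes; the discriminator and table names are hypothetical, not the product’s actual schema:

```typescript
// One combined picker for everyone involved in the communication.
// The "kind" discriminator and table names are hypothetical.
type Participant =
  | { kind: "individual"; id: number; name: string }
  | { kind: "organisation"; id: number; name: string }
  | { kind: "staff"; id: number; name: string };

interface LinkRecord {
  communicationId: number;
  linkedTable: string; // which table the link points to
  linkedId: number;
}

// The user only selects items from one list; behind the scenes each
// selection still becomes a link record in the right table.
function toLinkRecords(communicationId: number, selected: Participant[]): LinkRecord[] {
  return selected.map((p) => ({
    communicationId,
    linkedTable: p.kind,
    linkedId: p.id,
  }));
}
```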

Now, users see fewer fields. No more lengthy form to fill—or so it seems. When the form appears, it is shorter. Additional fields—the ones less used—are still available if the user clicks Show all, which lets users control the complexity on their screen.

Show All, for progressive disclosure
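A minimal sketch of the Show all behaviour (progressive disclosure); the field names are illustrative, not the product’s actual fields:

```typescript
// Progressive disclosure sketch: commonly used fields appear by default;
// "Show all" reveals the rest. Field names are illustrative only.
interface FieldDef {
  name: string;
  common: boolean; // shown before "Show all" is clicked
}

const fields: FieldDef[] = [
  { name: "Date", common: true },
  { name: "Who was involved", common: true },
  { name: "What was said", common: true },
  { name: "Internal notes", common: false },
  { name: "Reference number", common: false },
];

// The form renders visibleFields(false) at first; clicking "Show all"
// re-renders with visibleFields(true), so users control the complexity.
function visibleFields(showAll: boolean): FieldDef[] {
  return showAll ? fields : fields.filter((f) => f.common);
}
```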

Now, the labels support the mental model. The labels help connect the information into a story—the “gossip” of the new mental model.

The labels stitch together the story
Existing users who tested the new labels did not like the change but were able to perform their tasks successfully, so we left the labels as designed. We will re-assess the labels with new users, after a suitable period, to confirm we got the mental model right.

Show the mental model, hide the inner workings

Developers write programs that work. Developers who are proud of a particular solution may not understand why that solution should be hidden from users. But if the solution is complex, a good mental model keeps that solution out of sight, hidden behind the user interface.

It’s the job of usability analysts, user-experience designers, and software developers to work in tandem. Together, we identify suitable mental models and then clearly show these mental models through the user interface—even if it’s not an accurate reflection of what’s happening in the background.

The gap between the mental model—what the user thinks is happening—and the implementation model—what the developer built—is not a misrepresentation or an inaccuracy. It’s an additional layer that ensures our software products are more usable, users are more efficient, training costs are lower, support calls are fewer, and so on—all of which are legitimate business drivers.

Modernist design: Beyond flat and simple

In recent years the big players in software have adopted modernist design for their user interfaces. With this redesign, digital comes of age, with a look and feel that’s no longer bound by last century’s conventions or by the needs of those new to computing. A modernist user interface focuses people on their current task, supports fast-paced use, and embraces the fact that the interface is digital. The intent is to help people learn and use software more easily—and you’ve no doubt seen and used modernist interfaces, especially on your phones and tablets, in Google products, and on Windows 8 and later.

It’s easy to reduce modernism to two guidelines: put less on the screen, and make what’s still there look flatter. While modernist software works well, blindly applying those two guidelines without understanding the underlying principles can lead to puzzling and inconsistent experiences. In some cases—including in products by Apple and Microsoft—fewer items and less visual detail on screen have resulted in the removal of the necessary cues that separate content from controls, the cues that allow people to learn and use the software effectively.

Fewer elements on the screen

Modernism calls for a simpler interface. With fewer choices on the screen at any given time, people can work faster and are more likely to choose what they need—provided the right elements are present at the right time. Getting this right requires an excellent understanding of the tasks customers need to do, and when. Research with customers before the design stage and testing with customers during the design stage are both key. Simply putting less on the screen without testing how the interface performs in the hands of customers is a gamble.

Visually simpler elements on the screen

Modernism calls for an interface that is stripped of distracting ornamentation. With less visual noise on the screen, the interface is less demanding and—again—easier and faster to use. Simply removing extra lines, extra colours, and extra words without testing how the interface performs in the hands of users is a gamble. Here’s why.

In the illustration, example 1 shows a button that clearly has affordance. On screen, the button looks like a 3D object that we could press in the real world. The button’s gradient fill and shadow make it appear 3D. Most people know this 3D appearance from earlier versions of Windows and from the first models of the iPhone.

Examples of visual affordance

In example 2, the 3D appearance and shadow are removed. Most people first noticed this flattened look for buttons in Google Chrome and for tiles in Windows 8. With most of the affordances stripped away, in some contexts people have trouble recognising that these boxes are buttons to click or tap, or an object to drag or swipe.

It is possible to strip away the ornamentation without going too far. In example 2, the Search button (far right) has a slight gradient, slightly rounded corners, and a slight shadow on roll-over, all of which subtly help identify this element as a button. This button is from Google Gmail’s search box.

Example 3 shows “buttons” that have the background rectangle removed, leaving only text. For many years, blue text—especially when underlined—was used to identify a hyperlink. Most people know this link appearance from the Internet’s first two decades.

Examples of visual affordance

In example 4, the colour and underline are also removed. You may have noticed these plain, modernist links in mobile devices such as Windows Phone and applications such as Microsoft Zune. With the colour stripped away, in some contexts people have trouble recognising which text is a link to click, tap, or swipe.

It is possible to strip away the background colour and text decoration and still provide cues that people can interact with this text. For example, in the illustration, below, the row of links extends off the right edge of the screen to suggest they can be swiped. Extending off the screen is easy to do, for example by enlarging the text.

Swiping to see more

Example 5 shows two icons that have lots of realistic detail. This resemblance to physical objects is called skeuomorphism. For many years, skeuomorphism increased as improvements to computer screens allowed more colour, better display resolution, and thus more realism. During this time, each icon typically also had a label to help make the icon’s meaning clear.

Examples of visual affordance

In example 6, the skeuomorphism and labels have disappeared, or are hidden. Most people first noticed the flattened, modernist icons in Google apps and in Microsoft Office 2013. In some contexts, people don’t recognise the icons without their labels, and either don’t know how to or don’t think to reveal the labels.

It is possible to see the labels on demand. For example, Windows Phone provides an icon to tap that reveals the labels, as illustrated below.

New cues for digital interfaces

The ellipsis (the “…”) is a cue to users that “there’s more.” This cue is necessary—so it hasn’t been stripped away in this modernist design. This is also an example of a cue that’s natively digital. It’s not possible in the physical world to show and hide content because paper is static, not interactive.

On earlier iPhones and iPads, some apps would use skeuomorphic detail to cue people about how an app functioned. For example, an image of spiral-coil binding (below, left) signals that there may be more content on another page, and page-curl transitions that mimic a turning page reinforce this as the user navigates from screen to screen. In contrast, the more modernist design (below right) uses a cue that’s only possible in the digital world. The screen uses content peeking (look closely at the right edge) to signal that there is more content on another page.

Skeuomorphism

Some design challenges, as the content-peeking example shows, have excellent solutions that comply with modernist design principles: simple, less on the screen, and natively digital. Other design challenges have yet to be solved.

Natively digital

One example of an unsolved challenge is the Windows Phone calculator, whose functions change when the device is rotated. Rotating the device clockwise switches the standard calculator to a scientific calculator; rotating it counter-clockwise switches it to a binary-and-hexadecimal calculator. This exemplifies what it means to be truly digital—in the real world, a calculator on your desk cannot change its buttons, no matter how you rotate it.

Windows Phone calculator

However, the current design fails because there is nothing in the interface to suggest that these functional changes will occur, so there’s no reason to rotate the screen in order to discover them. This design challenge calls for a solution—a visible button, icon, or other explicit path—that clearly toggles through the calculator’s three states, for example by forcing the screen orientation to change for at least a few seconds while temporarily ignoring the device’s physical orientation, or perhaps by locking the orientation in place.

User interfaces that enable full product use

To be “usable” requires not only that people can successfully use software tools to perform their tasks, but also that they can locate and recognise those tools. This used to be easier to accomplish when there were fewer platforms.

The big players are changing the way user interfaces look and feel by applying modernist design to make them truly digital. At the same time, software is changing because the way people use it, where they use it, and with whom they use it are changing. As development teams continue to provide possibilities, they need to work with interface designers, usability analysts, and members of their target audience—the users or customers. Together, we can ensure that the new interactions and simplified interfaces we design and build tell people how the interface works, and both help and remind people to find the functions and content they need. It’s not about making things flat and removing clutter; it’s about getting the interface right while applying modernist principles so people can use the product fully.

Postscript

I came across a video, titled Everything is a remix, that explains the “why” of modernism.

Would you have designed it that way?

In my day-to-day life, I often think about design problems as I encounter them. I find myself wondering about information that I don’t have—details that would help me solve the problem I noticed. And I wonder: faced with the same constraints, would I have come up with the same solution? Here’s one I encountered.

Passengers waiting to board a ferry

Last week, some friends wanted to visit their family on an island. Where I live, people use ferries to travel between various islands and the mainland. At times, I’ve made the crossing on foot, by bus, or by passenger car. The choice might depend on the size of our group, how far we’re going on the other side, how much we want to spend, and what time of day and year we’re travelling. On busy days the ferries fill to capacity, and traffic reports may announce “a 1- or 2-sailing wait” between points. From time to time the media discusses changes to ferry service, prices, and ridership. All in all, there are a lot of factors influencing the deceptively simple question: “When I get to the ferry, will there be space for me on board?” The question could also be: “Can I avoid waiting in line?”

The ferry company’s website answers this question in a seemingly fragmented way, and that got me thinking: why was the answer fragmented, and what user needs was the website’s current design meeting? The ferry company segments its audience by mode of travel. This segmentation is logical for an audience motivated by cost, because a ferry passenger on foot pays less than a ferry passenger in a car. But when other decision-making factors are more important than price—such as space availability—segmenting users by mode of travel might not be helpful.

Can I avoid waiting?

The friends I mentioned earlier had all the time in the world to get to their family on the island. But they didn’t want to wait in line for hours. Finding the answer to “is there space for us, or will we have to wait” is complicated because the answers seem to be organized by mode of travel on different pages of the website. Here’s a reproduction of one of the first “is there space for me” answers I found on the website:

Is there space on the ferry?

Given the question, the above screen may not be clear. What is deck space? And—look closely at the orange bar—how much deck space is available? Is it zero or 100%? Is a reservation the same thing as a ticket? Does everyone require a reservation to board?

Here’s another way to present the same information, this time making it clearer that a driver’s willingness to pay more may influence wait time:

No reserved spaces on the ferry

Now it’s clear that this information about availability only applies to vehicles that want a reservation. That means foot passengers, bus passengers, and cyclists still don’t have an answer to the “will we have to wait” question. From experience, frequent travellers already know part of the answer: passengers on foot almost never have to wait, but occasional travellers and tourists wouldn’t know this. And travellers with vehicles may wonder about alternatives, because leaving the car on shore and boarding on foot could put them on an earlier ferry. The answer to “can we avoid waiting” may require a comparison of wait times for each mode of travel.

Here’s another way to present the information, this time listing more modes of travel:

Different types of space on the ferry

The above screen answers the “can we avoid waiting” question more clearly. In addition to providing greater certainty for some modes of travel, it also meets the (presumed) business need of generating revenue by selling reservations.

Design questions, but no answers

It’s easy to theoretically “solve” a design problem that we encounter, but there are always unknowns.

  • Is there really a design problem? How would we know?
  • Would this design have been technically possible?
  • Would this design have been affordable?
  • Would this design have met the needs of many users, or only a few?
  • Would this design have been ill received by customers or interested groups?
  • and so on….

So if you can’t know all the answers, why bother with the exercise? Because it’s what we do, in our line of work.

The trigger for this exercise

Here’s an excerpt of the screen that inspired this post.

Excerpt of the original screen

When a user interface is for using—not for understanding—a product

The purpose of a user interface is not to explain how a product works. Instead, its purpose is to help people use the product. Here’s an idea: if someone can use your product without understanding how it works, that’s probably just fine.

What model does the user interface reflect?

Models are useful to help people make sense of ideas and things.

An implementation model is how engineers and software developers think of the thing they’re building. It helps them to understand the product’s inner workings, the sum of its software algorithms and physical components. For example, a car mechanic has an implementation model of combustion engines.

A mental model is how someone believes a product behaves when they interact with it. It helps them to understand how to use the product. For example, a typical car driver has a mental model of pressing the gas pedal to go faster and pressing the brake to slow down. This mental model doesn’t reflect how the car is built—there are many parts between the gas pedal and the spinning tires that typical drivers don’t know about.

The implementation model and the mental model can be very similar. For example, the mental model of using a wood saw is that “The saw makes a cut when I drag it back and forth across the wood.” This overlaps with the implementation model. In addition to the back-and-forth user action, the implementation model also includes an understanding of how the saw’s two rows of cutting edges—one for the forward stroke and one for the backward stroke—help to cut the wood fibers, break the cut fibers loose, and then remove the fibers from the kerf, and whether the saw’s tooth shape is better suited to fresh wood or dried wood.

The mental- and implementation models can overlap, or not

The implementation model and the mental model can also be very different. Let’s consider another example: getting off a public-transit bus. The mental model of opening the exit doors is that “When the bus stops, I give the doors a nudge and then the doors open fully.” The implementation model of the exit doors is that, once the bus stops and the driver enables the mechanism, the exit doors open when a passenger triggers a sensor. Now consider this: if the sensor were a touch sensor, the passenger’s mental model of “nudging the door” would be correct. But, in fact, the sensor is a photoelectric sensor—a beam of light—so the passenger’s mental model of “nudging the door” is incorrect.

To exit, break the photoelectric beam

Getting bus passengers to break the photoelectric beam was a real-life design challenge that was solved in different ways. In Calgary, public-transit buses use a large, complex sign on exit doors to present a mental model that’s somewhat consistent with the implementation model:

Signage explains the complex implementation model

      TO OPEN THE DOOR
      1. WAIT FOR GREEN LIGHT
      2. WAVE HAND NEAR DOOR HERE

In Vancouver, public-transit buses use a large, simple sign on exit doors to present a mental model that’s inconsistent with the implementation model:

Signage for a simpler mental model

      TOUCH HERE ▓ TO OPEN

In fact, touch does not open the exit doors at all—not on the Vancouver buses or the Calgary buses I observed. Only when a passenger breaks the photoelectric beam will the doors open. In Calgary passengers are told to wave a hand near the door. A Calgary bus passenger might conclude that the exit door has a motion sensor (partly true) or a proximity sensor (not true).  In Vancouver passengers are told to touch a target, and the touch target is positioned so the passenger will break the photoelectric sensor beam when reaching for the target. A Vancouver bus passenger might conclude that the exit door has a touch sensor (not true).

Calgary bus passengers are more likely to guess correctly how the exit door actually works because the sign presents a mental model that partly overlaps the implementation model: the door detects hand-waving. But does that make it easier for someone without prior experience to exit the bus?

No, it’s harder.

It’s more difficult for a sign to get passengers to hold up a hand in the air in front of the door than it is to put a hand on the door. Here’s why: If you knew nothing about a door that you wanted to open outward, would you place a hand on the door and push? Or would you wave at it? From our lifelong experience with doors we know to push them open. Touching a door is more intuitive than waving at it, and that’s why “nudge the door” is a better mental model and thus an easier behaviour to elicit and train. The simpler mental model improves usability.

Rule of thumb for mental models

When understanding a product’s inner workings is unnecessary, staying true to the implementation model risks increasing the complexity of the user interface. Instead, have the user interface reflect a mental model that is simple, effective, and usable.

If you can relate the use of an object to a common experience or simple idea then do so—even if it doesn’t follow the implementation model. It is unnecessary for a system’s user interface to convey how the product was built. The user interface only needs to help users to succeed at their tasks.

No doubt there are cases where a lack of understanding of a product’s inner workings could cause danger to life and limb, or cause unintended destruction of property. In that case, the mental model needs to convey the danger or risk or, failing that, needs to overlap more with the implementation model.