Jul 27 2014

Recently, a client told me we needed to redesign the main data-entry form of their company’s flagship product. Customers said they didn’t like the form in our client’s SaaS or cloud-based software. After extra training, customers still felt apprehensive and intimidated by its complexity.

The online form was built years ago, without a designer, as was typical of many start-ups. Within a few years, the iPhone and iPad showed people that software can be simpler, and that’s what our client’s customers wanted, too. So we were tasked with simplifying the form.

To simplify, we changed the mental model

And we succeeded. By changing the mental model—the way users believe the data-entry form to work—we managed to:

  • make data entry feel simple and easy.
  • reduce inaccurate data and increase data quality.
  • help users discover existing features that they had not been using.
  • chop 60% off the data-entry time.

These improvements in user performance and user perception are based on GOMS calculations, feedback from the client’s customers, and self-reports by participants in two usability studies. All these gains came from changing the mental model, and adjusting the user interface to clearly reflect the new mental model.
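If you haven’t seen a GOMS calculation before, here’s a rough sketch of the kind of estimate involved, using the standard Keystroke-Level Model operator times. The operator counts below are invented for illustration; they are not the actual figures from our analysis.

```typescript
// A rough sketch of a GOMS Keystroke-Level Model (KLM) estimate.
// Operator times are the standard KLM values; the operator counts
// below are invented for illustration, not our actual analysis.

const SECONDS = {
  K: 0.2,  // keystroke (skilled typist)
  P: 1.1,  // point with the mouse
  H: 0.4,  // home hands between keyboard and mouse
  M: 1.35, // mental preparation
};

type Counts = { K: number; P: number; H: number; M: number };

const estimate = (c: Counts): number =>
  c.K * SECONDS.K + c.P * SECONDS.P + c.H * SECONDS.H + c.M * SECONDS.M;

// Hypothetical counts: the old flow needs extra pointing, homing, and
// mental preparation for the linking pop-ups and the second save.
const oldFlow = estimate({ K: 120, P: 30, H: 12, M: 25 }); // ≈ 96 s
const newFlow = estimate({ K: 100, P: 8, H: 4, M: 6 });    // ≈ 39 s

console.log(`Time saved: ${Math.round((1 - newFlow / oldFlow) * 100)}%`); // ≈ 60%
```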

What’s a mental model?

A mental model is a representation of how something works, which helps shape your approach to doing tasks. For example, you’re probably familiar with these mental models:

  • Fast-food restaurant. You arrive, order your food, pay, receive your food, sit down, then eat.
  • Fine-dining restaurant. You arrive, sit down, order your food, receive your food, eat, then pay.

The appearance of the restaurant helps you recognise what to do, and in what order.

Similarly, a mental model can help you complete your online task—especially if it closely resembles a real-world experience. For example:

  • Online shopping. You visit a store website, look at products, put some products in your shopping cart, choose delivery, and then pay. Later, you may track the delivery online, and receive the product in person.

To understand how the mental model affected users in this case study, let’s first look at how the product was, and then at how it is now.

The original mental model: Convoluted

We’re not actually sure what to call the old mental model, but we can describe the user’s task.

The user’s task is to enter the details of each communication, so they can be tracked and reported to an industry regulator. In the original workflow, you would do this:

  1. Click a button to start a new communication. In the main form, enter what was said, as well as the communication’s date, title, and whether it is confidential.
    Step 1: enter part of the information and save the communication
You cannot enter all the information yet, before saving. For example, you cannot enter the names of people involved in the communication. That’s an obvious part of any communication—but it cannot be entered in the main form at all!
Also, when saving, if you forgot any required information, you’ll get an error message.
    After you save, the form cannot close because there is more work to do.
  2. In a series of separate forms, select the names of people and topic tags involved in the communication.
    Step 2: link other records to the communication
The information from each secondary form does not appear on the main form. Instead, the main form increments a counter. For example, if you link two people, the counter says “2 contact names”.
    The workflow is even more complicated if the person involved in the communication is representing an organization, because then you need to select both, and link them together, with an additional pop-up form.
    Eventually, you’ll convince yourself that all the required contact names, staff names, topics, and other records are linked and counted on the main form.
  3. Save the main form again. After that, you can close the main form.
    Step 3: Save the communication a second time
    After closing the main form, we observed users doing an additional step. Most users navigate to the list of communications to confirm that the new communication is listed. This compulsion to check reflects the uncertainty that the convoluted workflow creates.
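To make the convoluted model concrete, here’s a rough sketch of the old main form’s record shape. All the names are invented for illustration; the product’s actual internals are, of course, private.

```typescript
// A rough sketch of the old main form's record shape.
// All names are invented for illustration.
interface OldMainForm {
  title: string;
  date: string;
  confidential: boolean;
  whatWasSaid: string;
  // The linked people and topics never appear on the main form.
  // Secondary pop-up forms merely increment these counters,
  // e.g. "2 contact names".
  contactNameCount: number;
  staffNameCount: number;
  topicCount: number;
}
```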

What was the original mental model called?

It was a struggle to describe the original mental model in familiar, real-world terms. The user must fill in a series of complex forms, juggling back and forth between them to transfer numbers to the main form. This might be the conceptual equivalent of preparing and filing income taxes—and just as unpleasant to do.
We had to change the mental model to one that was clearer and familiar.

Here’s a mock-up of the original form (left) and the new form (right):

The form, before and after

The new mental model: “Got gossip? Fill me in”

The redesigned interface combines two simple mental models that work well for the clerks who do most of the data entry. The form supports keyboard navigation, and can be pinned, so it is always open, ready to receive the next new communication.

  • Fill in a form. This is a mental model most people understand, and—provided the information they need to fill in is readily available—represents an easy task.
  • Gossip. During the data-entry stage, a typical communication record now resembles gossip, a mental model that’s readily understood, because humans tend to like stories: “Who said what about whom, how, where, when—and what’s our view of it?” Eliciting a story adds a bit of interest to a boring data-entry task.
    Of course, the gossip model is only suited for data entry. When reporting, the user interface groups, sorts, and presents the information as professional reports.

With the new mental model in mind, we carefully redesigned the data-entry form.

Now, users click Save only once. They fill in all the details at once; no more double saving. Behind the scenes, the same two-stage saving still occurs, but there’s no need to reflect this implementation in the workflow or user interface.

Save only once
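Here’s a minimal sketch of how a single Save click can hide the two-stage saving. The function names are hypothetical stand-ins, not the product’s actual code.

```typescript
// A minimal sketch of hiding two-stage saving behind a single Save.
// saveRecord and linkRecord are hypothetical stand-ins for the real
// product's persistence calls.

interface Draft {
  title: string;
  date: string;
  confidential: boolean;
  whatWasSaid: string;
  contactIds: number[];
  topicIds: number[];
}

async function saveRecord(draft: Draft): Promise<number> {
  return 42; // pretend the server returns the new record's id
}

async function linkRecord(
  commId: number, kind: "contact" | "topic", id: number,
): Promise<void> {
  // pretend the server creates the link record
}

// The user clicks Save once; both stages happen behind the scenes.
async function onSaveClicked(draft: Draft): Promise<void> {
  const id = await saveRecord(draft); // stage 1: save the main record
  for (const c of draft.contactIds) await linkRecord(id, "contact", c);
  for (const t of draft.topicIds) await linkRecord(id, "topic", t);
  // stage 2: link the related records, then the form can close
}
```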

Now users select items from lists. No more linking records. Behind the scenes, the linking of records still occurs, but there’s no need to mention that these are database records.

Just select the items—don't link
To reduce the number of controls, some types of items are now combined in one list, such as the different groups, individuals, and contacts involved in a communication.
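One way to build such a combined list is to give every selectable item a common shape, as in this sketch with invented names:

```typescript
// One way to combine groups, individuals, and contacts into one
// selectable list while still linking the right database records
// behind the scenes. All names are invented for illustration.

type Party =
  | { kind: "group"; id: number; name: string }
  | { kind: "individual"; id: number; name: string }
  | { kind: "contact"; id: number; name: string };

// The user sees one flat list to pick from...
const involved: Party[] = [
  { kind: "group", id: 1, name: "Compliance team" },
  { kind: "individual", id: 2, name: "Dana Smith" },
  { kind: "contact", id: 3, name: "Lee Jones (Acme Corp)" },
];

// ...while each selection is still saved to its own link table.
function linkTableFor(party: Party): string {
  switch (party.kind) {
    case "group": return "communication_groups";
    case "individual": return "communication_staff";
    case "contact": return "communication_contacts";
  }
}
```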

Now, users see fewer fields. No more lengthy form to fill—or so it seems. When the form appears, it is shorter. Additional fields—the ones less used—are still available if the user clicks Show all, which lets users control the complexity on their screen.

Show All, for progressive disclosure
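Here’s a minimal sketch of that Show all behaviour, using plain DOM calls; the element ids are invented for illustration.

```typescript
// A minimal sketch of the "Show all" progressive disclosure.
// The element ids are invented for illustration.

const lessUsedFieldIds = ["reference-number", "follow-up-date", "internal-notes"];

function setShowAll(showAll: boolean): void {
  for (const id of lessUsedFieldIds) {
    const field = document.getElementById(id);
    if (field) field.hidden = !showAll;
  }
}

// The form opens in its shorter state...
setShowAll(false);

// ...and expands only when the user asks for the full complexity.
document.getElementById("show-all-button")
  ?.addEventListener("click", () => setShowAll(true));
```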

Now, the labels support the mental model. The labels help connect the information into a story—the “gossip” of the new mental model.

The labels stitch together the story
Existing users who tested the new labels did not like the change but were able to perform their tasks successfully, so we left the labels as designed. We will re-assess the labels with new users, after a suitable period, to confirm we got the mental model right.

Show the mental model, hide the inner workings

Developers write programs that work. Developers who are proud of a particular solution may not understand why that solution should be hidden from users. But if the solution is complex, a good mental model keeps that solution out of sight, hidden behind the user interface.

It’s the job of usability analysts, user-experience designers, and software developers to work in tandem. Together, we identify suitable mental models and then clearly show these mental models through the user interface—even if it’s not an accurate reflection of what’s happening in the background.

The gap between the mental model—what the user thinks is happening—and the implementation model—what the developer built—is not a misrepresentation or an inaccuracy. It’s an additional layer that ensures our software products are more usable, users are more efficient, training costs are lower, support calls are fewer, and so on—all of which are legitimate business drivers.

Dec 30 2013

About ten years ago, usability testing of software got a lot easier, thanks to improved tools that let us see our products in the hands of users, along with the faces, voices, and actions of test participants. But then along came mobile devices, and usability testing of apps again became difficult.

Research needed an expensive facility

Around the turn of the century, it was possible to measure how an interface performed in the hands of customers, but that required an expensive lab that had cameras, microphones, and seats for observers behind one-way glass. In those days, only large companies had a budget for usability labs, so smaller companies made design decisions based on best guesses and opinions.

Due to budget constraints, I had no access to a usability lab. Instead, I would talk to software developers about the user interfaces we were building. “We can put text in the interface to explain how it works,” I’d say, and: “We can use different controls so it becomes more obvious how to use the product.” The developers and I were all interested in quality, but we didn’t always agree on what quality might look like. We relied on our opinions and personal biases to predict how software interfaces would perform in the hands of customers.

Then research got easy and affordable

One day, almost a decade ago, I heard about TechSmith Morae. This software was a game changer because it could turn any computer into a usability lab. TechSmith’s product evangelist, Betsy Weber, and her user-research colleague, Josie Scott, attended industry conferences and talked tirelessly about this miracle product that could record someone’s actions—clicks and typing—along with everything on a computer screen, plus their face and voice. All we had to do was plug in a camera and microphone, because these were the days before laptops had built-in cameras and microphones. Usability practitioners embraced this product. We gave people tasks to complete while we used Morae to watch them in action.

Suddenly, we could invite developers and other stakeholders to watch live user testing from an adjacent room, where they could watch and listen in near real time. They could see the participant’s puzzled expressions. They could see where the user was mousing, what they looked at and clicked, and what they overlooked. We could also record everything and then, from the recordings, make video clips to show which parts of our software caused participants to struggle. Suddenly, it was easy to help every team member understand the plight of their customers.

One of my earliest participants, during a half-hour usability test, went from blaming herself to expressing extreme dislike of the product. For the development team, I made a video clip of the key points to show how the participant’s attitude toward the product changed from neutral to extreme dislike, over 28 minutes. The participant gave us permission to use the video for product research, but not for this blog, so I’ve paraphrased the video’s story here:

Test participant after 2, 8, 19, and 28 minutes
This video was incredibly persuasive, because it showed the participant’s emotional reactions. When testing a product’s usability, it’s humbling to see the product fail and awful to see the product cause frustration and anger. But the point is to identify and fix problems before customers see them, and to get the development team thinking about involving users during the design process. (Usability testing also allowed me to show user delight at innovative new features that worked well, to reinforce that our user-centered design process was working.)

Around the same time that TechSmith released Morae, Web 2.0 enabled the development of competing tools that partially overlapped with Morae’s features. The majority of these tools only worked on web pages, not on installed apps. Also, the majority of these tools did not let researchers see and hear their users in action, as you can read in the descriptions of testing tools from 2009.

Mobile made research difficult again

Much as we love mobile devices, they’ve made usability testing harder. Diverse operating systems and the free movement of mobile devices present challenges that we haven’t seen for almost a decade. While the tools that assess websites still work, there are no tools that provide rich data about installed apps. We’re back to external cameras and the rigs that hold them, and we have to ask participants to keep their phones in a fixed position for the camera. So we’re back to expensive labs and special equipment. What we need is software that can do on a mobile phone what software can already do on a laptop computer: capture and transmit

  • the app’s screen
  • the participant’s voice
  • the participant’s facial expressions
  • the participant’s taps or gestures
  • the participant’s typing or speech-to-text input

Ironically, smartphones, many tablets, and most hybrid devices have the required camera and microphone. Unfortunately (as of January 2014), no company offers software that can capture and transmit this data from mobile devices running the Android, iOS, Windows, and BlackBerry operating systems. It’s especially the camera and voice data that helps researchers understand how participants feel—are they puzzled, frustrated, or delighted?

Not giving up

Development teams tend to be made up of tenacious and skilled people—including business analysts, designers, and developers—and they’ll follow the evidence. As practitioners, we want development teams to let go of the old ways and evolve toward evidence-based, user-centered design. And we’ll continue to look for ways to measure the empirical performance and the emotional impact of our designs, through usability testing.

Usability testing is an excellent way to show development teams what’s usable. Building a product that is measurably more usable leads to persuasive case studies that show the benefit of usability and user experience. It’s too bad that easy measurement tools are currently missing for apps that run on mobile devices. But a few challenges haven’t stopped us before, and they won’t stop us now.