A decade ago, usability testing of software got a lot easier, thanks to improved tools that let us see our products in the hands of users, along with the faces, voices, and actions of test participants. But then along came mobile devices, and usability testing of apps again became difficult.
Research needed an expensive facility
Around the turn of the century, it was possible to measure how an interface performed in the hands of customers, but that required an expensive lab that had cameras, microphones, and seats for observers behind one-way glass. In those days, only large companies had a budget for usability labs, so smaller companies made design decisions based on best guesses and opinions.
Due to budget constraints, I had no access to a usability lab. Instead, I would talk to software developers about the user interfaces we were building. “We can put text in the interface to explain how it works,” I’d say, and: “We can use different controls so it becomes more obvious how to use the product.” The developers and I were all interested in quality, but we didn’t always agree on what quality might look like. We relied on our opinions and personal biases to predict how software interfaces would perform in the hands of customers.
Then research got easy and affordable
One day, almost a decade ago, I heard about TechSmith Morae. This software was a game changer because it could turn any computer into a usability lab. TechSmith’s product evangelist, Betsy Weber, and her user-research colleague, Josie Scott, attended industry conferences and talked tirelessly about this miracle product, which could record someone’s actions (clicks and typing) along with everything on the computer screen, plus their face and voice. All we had to do was plug in a camera and a microphone, because these were the days before laptops had them built in. Usability practitioners embraced this product. We gave people tasks to complete while we used Morae to watch them in action.
Suddenly, we could invite developers and other stakeholders to watch and listen to live user testing from an adjacent room, in near real time. They could see the participant’s puzzled expressions. They could see where the user was mousing, what they looked at and clicked, and what they overlooked. We could also record everything and then, from the recordings, make video clips that showed which parts of our software caused participants to struggle. For the first time, it was easy to help every team member understand the plight of their customers.
One of my earliest participants, during a half-hour usability test, went from blaming herself to expressing extreme dislike of the product. For the development team, I made a video clip of the key points to show how the participant’s attitude toward the product changed from neutral to extreme dislike, over 28 minutes. The participant gave us permission to use the video for product research, but not for this blog, so I’ve paraphrased the video’s story here:
This video was incredibly persuasive because it showed the participant’s emotional reactions. When testing a product’s usability, it’s humbling to see the product fail and awful to watch it cause frustration and anger. But the point is to identify and fix problems before customers encounter them, and to get the development team thinking about involving users during the design process. (Usability testing also allowed me to show user delight at innovative new features that worked well, reinforcing that our user-centered design process was working.)
Around the same time that TechSmith released Morae, Web 2.0 enabled the development of competing tools that partially overlapped with Morae’s features. Most of these tools worked only on web pages, not on installed apps, and most did not let researchers see and hear their users in action, as you can read in the descriptions of testing tools from 2009.
Mobile made research difficult again
Much as we love mobile devices, they’ve made usability testing harder. Diverse operating systems and the free movement of handheld devices present challenges we haven’t seen for almost a decade. The tools that assess websites still work, but no tools provide rich data about installed apps. Instead, we’re back to expensive labs and special equipment: external cameras, the rigs that hold them, and participants asked to keep their phones in a fixed position for the camera. What we need is software that can do on a mobile phone what software can already do on a laptop computer: capture and transmit
- the app’s screen
- the participant’s voice
- the participant’s facial expressions
- the participant’s taps or gestures
- the participant’s typing or speech-to-text input
Ironically, smartphones, many tablets, and most hybrid devices already have the required camera and microphone. Unfortunately, as of January 2014, no company offers software that can capture and transmit this data from mobile devices running the Android, iOS, Windows, and BlackBerry operating systems. It’s the camera and voice data, especially, that help researchers understand how participants feel: are they puzzled, frustrated, or delighted?
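To make the screen-capture requirement concrete, here’s a minimal sketch of the kind of capability such software would need, assuming a device running Android 5.0 or later (Google shipped its MediaProjection screen-capture API later in 2014, after this post was written). The class name and output path are illustrative, not from any real product, and a real research tool would stream the session to observers rather than writing a local file:

```java
// Minimal sketch: record the screen and the participant's voice on Android.
// Assumes Android 5.0+ (MediaProjection) and the RECORD_AUDIO permission.
// Class name and output path are illustrative, not from a real product.
import android.app.Activity;
import android.content.Context;
import android.content.Intent;
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.media.MediaRecorder;
import android.media.projection.MediaProjection;
import android.media.projection.MediaProjectionManager;
import android.os.Bundle;
import java.io.IOException;

public class SessionCaptureActivity extends Activity {
    private static final int REQUEST_SCREEN_CAPTURE = 1;
    private MediaProjectionManager projectionManager;
    private MediaRecorder recorder;
    private VirtualDisplay display; // keeps the capture alive

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        projectionManager = (MediaProjectionManager)
                getSystemService(Context.MEDIA_PROJECTION_SERVICE);
        // The system asks the participant for permission to record the screen.
        startActivityForResult(projectionManager.createScreenCaptureIntent(),
                REQUEST_SCREEN_CAPTURE);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode != REQUEST_SCREEN_CAPTURE || resultCode != RESULT_OK) {
            return; // the participant declined, so nothing is recorded
        }
        MediaProjection projection =
                projectionManager.getMediaProjection(resultCode, data);

        // Encode screen video plus microphone audio into one file. A research
        // tool would transmit this to observers instead of writing locally.
        recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC); // participant's voice
        recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        recorder.setVideoSize(720, 1280);
        recorder.setOutputFile("/sdcard/session.mp4"); // illustrative path
        try {
            recorder.prepare();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }

        // Mirror the device screen into the recorder's input surface.
        display = projection.createVirtualDisplay("usability-session",
                720, 1280, getResources().getDisplayMetrics().densityDpi,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
                recorder.getSurface(), null, null);

        recorder.start();
    }
}
```

Even with a sketch like this, the harder problems remain: capturing the participant’s face with the front camera while they use the app, logging their taps and gestures, and transmitting it all to observers in real time.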
Not giving up
Development teams tend to be made up of tenacious and skilled people, including business analysts, designers, and developers, and they’ll follow the evidence. As practitioners, we want development teams to let go of the old ways and evolve toward evidence-based, user-centered design. And we’ll continue to look for ways to measure both the empirical performance and the emotional impact of our designs through usability testing.
Usability testing is an excellent way to show development teams what’s usable. Building a product that is measurably more usable leads to persuasive case studies that show the benefits of usability and user experience. It’s too bad that easy measurement tools are currently missing for apps that run on mobile devices. But a few challenges haven’t stopped us before, and they won’t stop us now.