Three weeks ago, a client called me. They had just completed release 1.0 of a new Web application that will replace their current flagship product. The client was asking about summative usability testing to evaluate how well the product performs in the hands of users, because they want their customers to succeed.
Since this is an enterprise-wide product that requires training, one thing the client specifically asked about was whether the Help is a help to users.
A quick heuristic review I did turned up no obvious problems in the Help, so we decided on user observation with scenarios. In a preparatory dry run done a few weeks ago, I supplied a participant with a few scenarios and some sample data. The participant I observed was unable to start two of the scenarios, and completed the third scenario incorrectly by adding data to the wrong database.
The Help didn’t help her. The participant was able to find the right Help topic, but she completely misinterpreted the first step in the Help’s instructions.
The team had not anticipated the apparent problem that turned up during the dry run. Assuming it is a real problem—and this can’t be more than an assumption given the sample size of 1—this story nicely illustrates the benefit of summative testing, as you’ll see below.
Best practices working together
The team, including a product manager, several developers, a technical communicator as Help author, and me as a contract usability analyst, used these best practices:
- The Help author used a single-sourcing method. The most common GUI names, phrases, and sentences are re-used, inserted into many topics from a single source, like a variable. In almost every Help topic, the problematic first step was one such re-usable snippet.
- The product manager assesses bugs by severity and cost, ensuring that the low-hanging fruit and the most serious defects get priority.
- In a heuristic review of the Help, I (wearing a usability-analyst hat) did not predict that the first step in most topics would be misinterpreted. Heuristic reviews, when conducted by a lone analyst, typically won’t predict all usability problems.
- The developers use an Agile method. At this stage of their development cycle, they build a new version of the product every Friday, and, after testing, publish it the following Friday.
After the dry run uncovered the apparent problem, the product manager said: “Let’s fix it.” Since the Help author used re-usable snippets, rewording in one place quickly fixed the problem throughout the Help. And the company’s Agile software development method meant the correction has already been published.
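The post doesn't name the authoring tool, but the mechanics of snippet re-use are easy to sketch: topics reference a shared snippet by name, so rewording the snippet once changes every topic that pulls it in. Here is a minimal illustration in Python; the snippet name, topic IDs, and wording are all invented for this sketch, not taken from the client's Help.

```python
# Minimal sketch of single-sourcing: Help topics pull shared snippets
# from one source, so a fix to the snippet propagates to every topic
# that re-uses it. All names and wording here are hypothetical.

SNIPPETS = {
    "first_step": "From the File menu, choose Open Database.",
}

TOPICS = {
    "add-record": "1. {first_step}\n2. Click Add Record.",
    "edit-record": "1. {first_step}\n2. Select the record to edit.",
}

def render(topic_id: str) -> str:
    """Expand snippet placeholders in a topic, like variable insertion."""
    return TOPICS[topic_id].format(**SNIPPETS)

# Rewording the snippet in one place fixes every topic that uses it:
SNIPPETS["first_step"] = (
    "On the File menu, click Open Database, then select your project database."
)

assert "project database" in render("add-record")
assert "project database" in render("edit-record")
```

Real help-authoring tools implement the same idea with conrefs or variables rather than string templates, but the payoff is identical: one edit, many topics corrected.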
Was this the right thing to do? Should an error found by one participant during a dry run of upcoming usability tests result in a change? The team’s best practices certainly made this change so inexpensive as to be irresistible. With the first corporate customer already migrated to the new product, my client has a lot riding on this. I can’t be certain this rewritten sentence has improved the Help, but—along with the other bugs they’ve fixed—I know it increases my client’s confidence and pride in their product’s quality.
It’ll be interesting to see what the upcoming user observations turn up.
Reminding myself of things I already know
The actual user-observation sessions are still ten days away, but the dry run reminded me of things I already know:
- Despite each professional’s best efforts, there will always be unanticipated outcomes where users are involved. Users have a demonstrated ability to be more “creative” than designers, developers, and content authors, simply by misinterpreting our work and using it in unintended ways.
- The best practices in each discipline can dovetail and work together to allow rapid iteration of the product by the team as a whole. A faster response means fewer users will be affected and the cost of support—and of the rapid iteration—will be lower. A good development process adjusts practices across teams (product management, research, development, user experience, design, tech-comm, quality assurance) so the practices dovetail rather than conflict.
- Summative testing helps validate and identify what needs to be iterated. Testing earlier and more often means that fewer or perhaps no users will be affected. Testing earlier and more often is a great way to involve users, a requirement for user-centred design, or UCD. It also changes the role of testing from summative to formative, as it shapes the design of the product before release, rather than after.
2 Replies to “Testing in the UX-design process”
Ah. Re-read the first paragraph, Lois, and you’ll see they are asking for summative testing, with version 1.0 and the first bug fixes (plus many of the recommendations from my heuristic review of the software) already out the door. But … they have held off shipping version 1.0.x to other customers who, I’m told, are clamouring for this product. They obviously have a heck of a sales-and-demo team, since I’ve never heard of enterprises begging to be migrated to version 1.0, but they do have some unique features that industry wants.
My user observations will take place in tandem with scheduled training for the second customer. There’s pressure to get the testing and recommendations quickly, so I’m glad the client is Agile. They’ll be rolling out iterations every week or two.
I am impressed that the company is willing to spend the time and money, and has the time in their release schedule, to allow such testing.