Teamwork reduces design risk

It takes a range of skills to develop a product. Each skill—embodied in the individuals who apply that skill—brings with it a different focus:

  • Product managers talk about features and market needs.
  • Business development talks about revenue opportunities.
  • Developers talk about functionality.
  • Usability analysts talk about product and user performance.
  • Interaction designers talk about the user experience.
  • QA talks about quality and defects.
  • Marketing talks about the messaging.
  • Technical communicators talk about information and assistance.
  • You may be thinking of others. (I’m sorry if I’ve overlooked them.)

What brings all these together is design.

Proper design begins with information: a problem statement or brief, information about the targeted users and the context of use, the broad-brush business constraints, some measurable outcomes (requirements, metrics, or goals), and access to a subject-matter expert who can answer questions about the domain and perhaps present a competitor analysis.

Design continues by saturating the design space with ideas and, after the space is filled, analysing and iterating the ideas into possible solutions. The design process will raise questions, such as:

  • What is the mental model?
  • What are the use cases?
  • What is the process?

The analysis will trigger multiple rapid iterations, as the possible solutions are worked toward a single design:

  • Development. What are the technical constraints? Coding costs? Effect on the stability of the existing code? Downstream maintenance costs?
  • Usability. Does the design comply with heuristics? Does it comply with the standards of the company, the industry, and the platform? Does paper prototyping predict that the designed solution is usable?
  • User experience. Will the designed solution be pleasing due to its quality, value, timeliness, efficiency, innovation, or the five other common measures of customer satisfaction?
  • Project sponsor. Is the design meeting the requirements? Is it reaching its goals or metrics?

It’s risky to expect one person—a developer, for example—to bring all these skills and answers to the table. A team—properly facilitated and using an appropriate process—can reduce the risk and reliably produce great designs.

Reboot or restart?

I find this error message funny for two reasons.

  • It spoofs the “choice” that many error messages present to us. It amounts to: “The program failed. OK?” It’s not OK, but your only choice is to “agree” with the message.
  • It spoofs the developer jargon that we read in error messages: a reboot versus a system restart.

Ten-year-old advice

Fresh advice, still:

“Usability goals are business goals. Web sites that are hard to use frustrate customers, forfeit revenue, and erode brands.

“Executives can apply a disciplined approach to improve all aspects of ease-of-use. Start with usability reviews to assess specific flaws and understand their causes. Then fix the right problems through action-driven design practices. Finally, maintain usability with changes in business processes.”

—McCarthy & Souza, Forrester Research, September 1998

How to test earlier

Involving users throughout the software-development cycle is touted as a way to ensure project success. Does usability testing count as user contact? You bet! But since most companies test their products later in the process, when it’s difficult to react meaningfully to the user feedback, here are two ways to get your testing done sooner.

Prioritise. Help the Development team rank the importance of the individual programming tasks, and then schedule the important tasks to complete early.

  • If a feature must be present in order to have meaningful interaction, then develop it sooner.
  • For example, email software that doesn’t let you compose a message is meaningless. To get meaningful feedback from users, they need to be able to type an email.

    Developers often want to start with the technologically risky tasks. Addressing that risk early is good, but it must be balanced against the risk of a product that’s less usable or unusable.

  • If a feature need not be present or need not be working fully in order to have meaningful interaction, then provide hard-coded actions in the interim, and add those features later.
  • For example, if the email software lets users change the message priority from Standard to Important, hard-code it for the usability test so the priority is always Standard (a minimal stub along these lines is sketched after this list).

  • If a less meaningful feature must be tested because of its importance to the business strategy, then develop it sooner.
  • For example, email software that lets users record a video may be strategically important for the company, though users aren’t expected to adopt it widely until most laptops ship with built-in cameras.
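Where a feature can be stubbed out, the interim hard-coding can be tiny. Here is a minimal TypeScript sketch of the “hard-code the priority” example above; the names (MessagePriority, getSelectedPriority, composeMessage) are hypothetical, not from any real email product.

```typescript
// A hypothetical stub: the priority picker isn't built yet, so the value is
// hard-coded to keep the rest of the compose flow testable with users.
type MessagePriority = "Standard" | "Important";

interface DraftMessage {
  to: string;
  body: string;
  priority: MessagePriority;
}

// Hard-coded for the usability test; replace with real UI input after the study.
function getSelectedPriority(): MessagePriority {
  return "Standard";
}

// The compose flow that participants exercise during the test.
function composeMessage(to: string, body: string): DraftMessage {
  return { to, body, priority: getSelectedPriority() };
}

const draft = composeMessage("test@example.com", "Hello from the usability lab");
console.log(draft.priority); // always "Standard" until the feature is built
```

Once the priority picker is actually developed, only getSelectedPriority needs to change; the compose flow that users tested stays as it was.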

Schedule. For each feature to be tested, get the Development team to allocate time to respond to usability recommendations, and then ensure this time is neither reallocated to problem tasks nor used up during the initial development of the to-be-tested features. Engage the developers by:

  • Sharing the scenarios in advance.
  • Updating them on your efforts to recruit usability-study participants.
  • Retesting after developers incorporate your recommendations, and then reporting the improvements in user performance.

Development planning that prioritises programming tasks based on the need to test, and then allows time in the schedule to respond to recommendations, is more likely to result in usable, successful products.

Usability, not drowning

Is it a usability experiment? Are they testing the theory that a baby in a bucket won’t fall over as easily as a baby in a bath?

[Photo: babies in buckets]

Photo from Yahoo.co.jp news.

Actually, it’s a photo that accompanies a news story about babies in IJmuiden, The Netherlands, whose parents were learning baby massage. The warm bath beforehand is supposed to make the babies feel as they did in the uterus.

Common tasks losing usability

There’s been a loss of usability for people who type text in a browser.

Like me, you may have had these unwelcome experiences:

  • After typing a long message in Facebook, when I click send, I get a page error and my entire message is lost.
  • After typing a post in WordPress, if the server has gone down or I press an unintended keyboard shortcut, I lose my entire post.

By comparison, if Word stops responding, it will (try to) offer me my unsaved changes when I reopen the document.

With users increasingly doing their work in browsers, why don’t browsers remember the text we were typing? Or why doesn’t the operating system? Or why doesn’t the weblication make backups? Or the server?

When my actions in a browser fail, imagine a message such as this:

[Mock-up: a message supporting the user after a failure in the browser]

This hints at just one possible solution. In the spirit of Five Sketches™, I bet you can come up with at least five more ways to support users after they’ve lost the work they typed in a browser.
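For instance, one of those ways might be for the weblication itself to keep a rolling backup of the text. Here is a hedged, minimal sketch in TypeScript; the element id ("message") and the storage key are assumptions for illustration, not taken from any real site.

```typescript
// Minimal sketch of a client-side draft autosave using localStorage.
const DRAFT_KEY = "draft:message";
const field = document.getElementById("message") as HTMLTextAreaElement | null;

if (field) {
  // Restore a draft that survived a crash, a lost connection, or a stray shortcut.
  const saved = window.localStorage.getItem(DRAFT_KEY);
  if (saved !== null && field.value === "") {
    field.value = saved;
  }

  // Save the draft as the user types.
  field.addEventListener("input", () => {
    window.localStorage.setItem(DRAFT_KEY, field.value);
  });
}

// After a successful send, the application would clear the draft:
// window.localStorage.removeItem(DRAFT_KEY);
```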

Which user involvement works

User-centred design (UCD) advocates involving users in the design process. Have you wondered what form that user involvement could take, and which forms lead to the most successful outcomes?

I recently came across data that Mark Keil published a while ago. He surveyed software companies and correlated project outcomes with the type of user access that designers and developers had.

Type of contact with users, and its relative effectiveness (longer bar = more effective)
  For custom software projects
  Facilitate teams, hold structured workshops with users, or use joint-application development (JAD). ██████████
  Expose users to a UI prototype or early version to uncover any UI issues. ██████
  Expose users to a prototype or early version to discover the system requirements. ████
  Hold one-on-one semi-structured or open-ended interviews with users. ████
  Test the product internally (acceptance testing rather than QA testing for bugs) to uncover new requirements. ██
  Use an intermediary to define user goals and needs, and to convey them to designers and developers. ██
  Collect user questions, requirements, and problems indirectly, by e-mail or online locations.
  For packaged or mass-market software projects
  Listen to live/synchronous phone support, tech-support, or help-desk calls. ████████
  Hold one-on-one semi-structured or open-ended interviews with users. ██████
  Expose users to a UI prototype or early version to uncover any UI issues. ████
  Convene a group of users, from time to time, to discuss usage and improvements. ████
  Expose users to a prototype or early version to discover the system requirements. ██
  Test the product internally (acceptance testing rather than QA testing for bugs) to uncover new requirements. ██
  Consult marketing and sales people who regularly meet with and listen to customers. ██
  At trade shows, show a mock-up or prototype to users and get their feedback.
  Not reported as effective in this 1995 source
  Conduct a (text) survey of a sample of users.
  Conduct a usability test to “tape and measure” users in a formal usability lab. (This study precedes such products as TechSmith Morae.)
  Observe users for an extended period, or conduct ethnographic research.
  Conduct focus groups to discuss the software.

Although Keil’s article includes quantitative data, his samples are small. I opted to show only the relative usefulness of various methods. My descriptions, above, are long because the original article uses 1995 terms that have shifted in meaning. I believe some of the categories now overlap, due to changes in technology and method. For example, getting users to try a prototype of the UI in order to uncover UI issues sounds like the early usability testing that I do with TechSmith Morae, yet the 1995 results gave these activities very different effectiveness ratings.

For details, see the academic article by Mark Keil (Customer-developer links in software development). Academic publishers typically charge a fee for access.

These methods also relate to research I’m doing on the epistemology of usability analysis.

Epistemology of usability studies

Currently, I’m conducting research on usability analysis and on how Morae software might influence that. My research gaze is rather academic, in that I’m especially interested in the epistemology of usability analysis.

One of my self-imposed challenges is to make my research relevant to usability practitioners. I’m a practitioner and CUA myself, and I have little time for academic exercises because I work where the rubber hits the road. This blog post outlines what I’m up to.

At Simon Fraser University, I learned that epistemological approaches make different assumptions about what is knowable. On one side (below, left), it’s about numbers, rates, percentages, graphs, grids, tables, and proving absolute truths. On the other side (below, right), it’s about seeking objectivity while knowing that it’s impossible, because everything has a cultural context. The epistemology you choose, when doing research, depends on what you believe. And the epistemology dictates what methods you use and how you report your results.

(Left) You can be certain of what you know.
(Right) You cannot be objective about what you know.

Let’s look at some examples.

Study 1 fits with the view (above, left) that “you can be certain of what you know.” I plan and conduct a quantitative study to measure the time it takes a series of users to complete two common tasks in a software package: upgrading to the latest version of the software, and activating the software. I make appointments with users. In my workplace, I give each user a scenario and a computer. I observe them and time them as they complete the tasks by using the software package. My hope is that statistical analysis will give me results that I can report, including the average time on task with error bars, as the graph (right) illustrates.
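To make that concrete, here is a small sketch of the kind of summary Study 1 aims for: the mean time on task plus an error bar (here, the standard error of the mean). The task times below are invented purely for illustration.

```typescript
// Compute the mean and the standard error of the mean for a set of task times.
function meanAndStandardError(timesInSeconds: number[]): { mean: number; se: number } {
  const n = timesInSeconds.length;
  const mean = timesInSeconds.reduce((sum, t) => sum + t, 0) / n;
  const sampleVariance =
    timesInSeconds.reduce((sum, t) => sum + (t - mean) ** 2, 0) / (n - 1);
  return { mean, se: Math.sqrt(sampleVariance / n) };
}

// Hypothetical times (in seconds) for the "upgrade the software" task.
const upgradeTimes = [212, 187, 243, 198, 226];
const { mean, se } = meanAndStandardError(upgradeTimes);
console.log(`Mean time on task: ${mean.toFixed(0)} s ± ${se.toFixed(0)} s (SE)`);
```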

Study 2 fits with the view (above, right) that “you cannot be objective about what you know” because all research takes place within a context. To lessen the impact of conducting research, I contact users to ask if I can study their workplace. I observe each user for a day. My hope is to analyse the materials and interaction that I’ve observed in context—complete with typical interruptions, distractions, and stimuli. Since a new software version has just been released, my hope is that I’ll get to observe them as they upgrade. I’ll report any usability issues, interaction-design hurdles, and unmet needs that I observe.

The above are compilations of studies I conducted.

  • Study 1 revealed several misunderstandings and installation problems, including a user who abandoned the installation process because he believed it was complete. I was able to report the task success rate and have the install wizard fixed.
  • Study 2 revealed that users write numbers on paper and then re-enter them elsewhere, which had not been observed when users visited our site for usability testing. One user told me: “I never install the latest version because the updates can be unstable,” and another said: “I only upgrade if there’s a fix for a feature I use” to avoid unexpected new defects. I was able to report the paper-based workaround and the users’ feelings about quality, for product managers to reflect in future requirements.

Clearly, there’s more than one way to conduct research, and not every method fits every team. That’s an idea that can be explored at length.

This has me wondering: which method fits what, when, where? Is there a relationship between a team’s development process and the approach to user research (epistemology) that it’s willing to embrace? …between its corporate usability maturity and the approach?

Those are two of the lines of inquiry in my research at Simon Fraser University.

If you liked this post, you may also like Are usability studies experiments?

Standard OK-Cancel button order

I have two stories about command buttons.

Quite a few years ago, a team member walked me through a new dialog box. He entered some data, and then unintentionally clicked the Cancel button. He made this error twice in a row, thus losing his changes twice in a row. I pointed out that the OK and Cancel buttons were in the wrong order. The developer switched the buttons to the Windows-standard layout (below, right), and the user-performance problem was solved.

A few years later, on a different project, not only were the buttons in a non-standard order, they used non-standard wording and coloured icons. My request to follow the Windows standard was met only half-way, and the half-finished change went to Beta testing before I saw it again. The buttons were now in the correct order, but the button names had changed, and the names and icons were still non-standard. Beta testers loudly protested the change. (Beta testers are often expert users, and experts abhor any change that slows them down.) At the time, the company was only a few steps up the Nielsen Corporate Usability Maturity model, so instead of completing the change to Windows-standard OK and Cancel buttons, the change was rolled back to appease the protesting Beta users. I found out too late to retest with Windows-standard buttons, so there was no data to convince the developers. For me, it was an opportunity to learn from failure. 🙂

Why is non-standard so hard?

Try this Stroop test (right). Ignore the words. Instead, identify the colours out loud. No doubt, the second panel went more slowly and took more effort.

Try the variation at left. Find the first occurrence of the word Blue. Next, find the first occurrence of the colour blue.

Just as mismatches between text and colour slow your Stroop-test performance, mismatches between standard and non-standard OK and Cancel buttons slow user performance. Our Beta users clicked the wrong buttons—a huge waste of their time—because the new solution didn’t follow any standard. The Beta testers were right to protest, but wrong in their demand to revert to the original non-standard state. (See: Customers can’t do your job.)

Users learn GUI patterns—patterns that are widely reinforced by everyday use—and they expect GUIs to behave predictably, so it’s unwise to deviate radically from the standards unless there are product-management reasons to do so.
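As one hedged illustration, a web dialog footer that follows the Windows-standard order can be built in a few lines. The helper name and styling below are hypothetical, not from any particular product; the point is simply OK before Cancel, right-aligned, with standard wording and no decorative icons.

```typescript
// Hypothetical helper: a dialog footer in the Windows-standard layout.
function buildDialogFooter(onOk: () => void, onCancel: () => void): HTMLDivElement {
  const footer = document.createElement("div");
  footer.style.display = "flex";
  footer.style.justifyContent = "flex-end"; // commit buttons at the right edge
  footer.style.gap = "8px";

  const ok = document.createElement("button");
  ok.textContent = "OK"; // standard wording, no coloured icons
  ok.addEventListener("click", onOk);

  const cancel = document.createElement("button");
  cancel.textContent = "Cancel";
  cancel.addEventListener("click", onCancel);

  footer.append(ok, cancel); // Windows-standard order: OK first, then Cancel
  return footer;
}
```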

I’ll write more about following standards versus designing something new in the coming few posts.

P.S. It looks like Jakob Nielsen got here before me.

Users are not used to it

For several years, I did usability testing on CAD-style software that was full of legacy code, some of which preceded Windows 98.

Some of that legacy code dealt with CAD objects that displayed on screen. To work with these objects, users had a choice of menu commands and toolbar buttons, supplemented by dialog boxes. For example, to move an object, users could not simply click and drag it; they would choose a command, click the object, and then, in a dialog box, enter the distance to move the object.

That’s the way CAD programs worked when that legacy code was originally written.

Over the years, during my usability testing of various features, I noticed a growing trend toward direct manipulation. That is, to work with an object, users would try to click it or drag it. They would do this without thinking. Even long-time users, faced with a new feature (studies from 2005-2006), would try direct manipulation first:

  • 100% of the test subjects clicked a cube, trying to select it.
  • 100% of the test subjects dragged a point or line, trying to move it.
  • 100% of the test subjects clicked in the window, trying to create a point.
  • 100% of the test subjects dragged across points, trying to select them.

But the new features were built on the legacy code, so they had the command-driven interaction style. A simple click on the object was usually a dead end for users.

And the users would say: “Darn,” and then look for another vector—another pathway—to complete the task.
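What the test subjects were reaching for is easy to sketch. Here is a hedged, minimal example of drag-to-move direct manipulation in TypeScript; the names (CadObject, enableDragToMove) are hypothetical, and browser pointer events stand in for the product’s real event system.

```typescript
// Direct manipulation: dragging an object moves it, with no command or dialog box.
interface CadObject {
  x: number;
  y: number;
}

function enableDragToMove(element: HTMLElement, object: CadObject): void {
  let dragging = false;
  let lastX = 0;
  let lastY = 0;

  element.addEventListener("pointerdown", (e) => {
    dragging = true;
    lastX = e.clientX;
    lastY = e.clientY;
    element.setPointerCapture(e.pointerId); // keep receiving moves during the drag
  });

  element.addEventListener("pointermove", (e) => {
    if (!dragging) return;
    object.x += e.clientX - lastX; // move by the pointer delta
    object.y += e.clientY - lastY;
    lastX = e.clientX;
    lastY = e.clientY;
  });

  element.addEventListener("pointerup", (e) => {
    dragging = false;
    element.releasePointerCapture(e.pointerId);
  });
}
```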

The reasons we didn’t provide direct manipulation:

  • “Our users are used to the way it is now.” Clearly, usability-test results negated that argument. Users are not so accustomed to old-style interaction, because their first instinct for new CAD tasks in an existing product was direct manipulation.
  • “There’s not enough bang for the buck” because the opportunity cost (the cost of skipping other possible projects) was deemed too great. As a usability analyst, I find it hard to argue with this. The company opted for more features, and may have increased its risk of being leapfrogged by the competition, as discussed in an earlier blog post.