Poor usability is a form of tech debt

If you’ve worked in software development for a while, you may have noticed that work on usability gets postponed more often than work on new features and functions. You could see this as a form of tech debt. It accumulates with every release.

What contributes to this accumulation?

  • Timing. Some usability issues aren’t identified until Alpha testing with customers begins, or until after the product is released.
  • Competition. There’s often pressure to leap ahead or catch up with competitors by adding new features and functions.
  • Budgeting. If multiple teams compete for a share of the development budget, something shiny and new may attract more funding than boring old maintenance, upgrades, and tech debt.

It’s not an either/or proposition. With every release you can give your product a bit more usability. And you can do this at a low-to-moderate cost and low-to-moderate risk.

Each release is a chance to pay down part of that usability tech debt. For example:

  • rewrite error messages to be clear.
  • replace jargon in the user interface with plain language.
  • update broken information architecture.
  • refactor your code so it prevents user errors.
  • shorten complex workflows.

These are listed from lowest-cost/lowest-risk to higher-cost/higher-risk. Even the last one—changing workflows—can be safe and affordable, as shown by a case study.

Make error messages clearer

Jakob Nielsen, a web usability consultant and human-computer interaction researcher, proposed a list of 10 heuristics: broad rules for usable software. Among other things, they ask for visibility of the system status and, when something goes wrong, ask you to “help users recognize, diagnose, and recover from errors”. Nielsen adds that “error messages should be expressed in plain language, precisely indicate the problem, and constructively suggest a solution.”

Rewriting error messages is low-cost and low-risk.

Humorous error message. The title is: "You did it wrong". The message body is "If we tell you how to fix it, you won't learn. What do you want to do next?" The buttons are: "Struggle" and "Cry".
A humorous error message that is unhelpful and that mocks the user.
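
As a sketch of what the rewrite can look like in code (the function, the folder scenario, and the wording below are invented for illustration), compare the vague message in the comment with a rewrite that names the problem in plain language and suggests a way forward:

```python
def save_report(path: str, data: str) -> None:
    """Save a report, and explain clearly what to do if saving fails."""
    try:
        with open(path, "w", encoding="utf-8") as f:
            f.write(data)
    except PermissionError:
        # Before: "Error 0x5: operation failed."
        # After: plain language, a precise problem, and a constructive suggestion.
        raise PermissionError(
            f"Can't save the report to '{path}' because you don't have "
            "permission to write to that folder. Choose a different folder, "
            "or ask an administrator for write access."
        ) from None
```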

Replace jargon with plain language

If usability testing shows that some groups of customers or users can’t make sense of the text in your user interface, fix it.

Jargon is useful shorthand that lets professionals communicate precisely and quickly. But when your intended audience includes novices or non-professionals, it’s better to replace jargon with plain language.

And even experts benefit from plain language.

These changes tend to be low-cost and low-risk to implement, with substantial improvements in comprehension.
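
If the interface text lives in a string-resource table, the fix can be as small as editing a few entries. Here’s a minimal sketch with hypothetical keys and wording; the jargon versions appear in the comments:

```python
# Hypothetical UI string table: jargon in the comments, plain language in the values.
UI_STRINGS = {
    # Before: "Abort transaction?"
    "confirm_cancel": "Cancel this payment?",
    # Before: "Invalid auth token."
    "session_expired": "Your session has ended. Please sign in again.",
    # Before: "Execute batch ingest job."
    "run_import": "Import the files.",
}

def ui_text(key: str) -> str:
    """Look up a UI string by key, falling back to the key itself."""
    return UI_STRINGS.get(key, key)
```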

Update the information architecture

As a team develops more functions, it gets harder to shoehorn them into the software in a way that makes sense to users. The information architecture, or IA, that worked for a smaller set of functions becomes unclear. This also applies to a website as more topics and tasks are added.

IA research is relatively easy and low-cost to conduct. For example, I worked on a software package that had many menus and commands. We added a logger and then analysed which commands were used together and which were used in succession. We wanted this data so we could reorganise the menus and toolbars to match how users actually worked. First-click testing then let us confirm that the high-level IA would work.
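
Here’s a rough sketch of that kind of log analysis, assuming the logger produces one list of commands per user session (the commands and sessions below are invented):

```python
from collections import Counter
from itertools import combinations

# Hypothetical command log: one list of commands per user session.
sessions = [
    ["open", "zoom", "annotate", "save"],
    ["open", "annotate", "export"],
    ["open", "zoom", "save"],
]

# Which commands are used together in the same session (unordered pairs)?
used_together = Counter()
for commands in sessions:
    used_together.update(combinations(sorted(set(commands)), 2))

# Which commands are used in succession (ordered, consecutive pairs)?
used_in_sequence = Counter()
for commands in sessions:
    used_in_sequence.update(zip(commands, commands[1:]))

print(used_together.most_common(3))
print(used_in_sequence.most_common(3))
```

The pairs with the highest counts are candidates to sit near each other in a menu or on a toolbar.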

With the help of a user researcher, IA studies on possible ways to organise websites are even easier and faster to do, and they’re an important step when adding new elements and new areas of content to a website or service. Card sorting and tree testing (sometimes called reverse card sorting) are the standard methods for this.

A sample dendrogram generated by a tree-testing study. It suggests a way to organise topics on a website.
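
If a study gives you a matrix of how often participants grouped pairs of topics together, a dendrogram like the one above takes only a few lines of hierarchical clustering. This is a minimal sketch, assuming scipy and matplotlib are available; the topics and numbers are invented:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

# Hypothetical grouping data: how often participants put each pair of
# topics in the same group (1.0 = always, 0.0 = never).
topics = ["Pricing", "Invoices", "Refunds", "Profile", "Passwords"]
similarity = np.array([
    [1.0, 0.8, 0.7, 0.1, 0.1],
    [0.8, 1.0, 0.6, 0.2, 0.1],
    [0.7, 0.6, 1.0, 0.1, 0.2],
    [0.1, 0.2, 0.1, 1.0, 0.9],
    [0.1, 0.1, 0.2, 0.9, 1.0],
])

# Convert similarity to distance, cluster, and draw the dendrogram.
condensed_distance = squareform(1.0 - similarity)
dendrogram(linkage(condensed_distance, method="average"), labels=topics)
plt.show()
```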

Prevent errors—both slips and mistakes

As you design and build functions, prevent what are likely to be common problems for users. If you’re not sure what those might be, ask your user researcher to find out. Then refactor the code to

  • remove error-prone conditions, or
  • prevent error-prone conditions by adding input constraints, or
  • detect the error-prone state, then ask the user to confirm what they want or suggest alternatives,

and to

  • design the feature to be flexible, so that it doesn’t stop users in their tracks and the error can be fixed later.

Before coding begins, encourage your user-experience designer to work with the development team to identify and address possible errors. The risk of this work is rolled into the risk of building the feature. The cost of writing code to prevent errors is low to moderate at the time the feature is built, and higher if the fix comes later.
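
As a minimal sketch of the second and third options in the list above, here’s what an input constraint and a confirmation step might look like; the field names, limits, and the confirm callback are invented for illustration:

```python
from datetime import date

def parse_quantity(raw: str, maximum: int = 100) -> int:
    """Constrain input so only whole numbers in a sensible range get through."""
    if not raw.strip().isdigit():
        raise ValueError("Enter a whole number, for example 3.")
    quantity = int(raw.strip())
    if not 1 <= quantity <= maximum:
        raise ValueError(f"Enter a number between 1 and {maximum}.")
    return quantity

def confirm_backdated_entry(entry_date: date, confirm) -> bool:
    """Detect an error-prone state and ask the user to confirm what they want."""
    if entry_date < date.today():
        return confirm(
            f"The date {entry_date.isoformat()} is in the past. "
            "Do you want to record a backdated entry?"
        )
    return True
```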

Shorten complex workflows

Complexity hurts user performance and increases the need for training and user support.

Fixing complexity may be cheaper than you think. For example, I redesigned the user interface of a data-entry task without a single change to the back-end code or database architecture. Leaving the back end untouched kept both the cost and the risk of the code changes down.

After we built the redesigned user interface, user research showed that it cut task time by 60% and reduced both data-entry errors and the need for training. You can read about it in detail in this post about mental models.

A simplified illustration of the overly complicated workflow, showing one database record linked to three other database records.
A sample illustration from the mental-models case study.