Usability of a potential design

Three-quarters of the way through a Five Sketches™ session, to help iterate and reduce the number of possible design solutions, the team turns to analysis. This includes a usability analysis.


After (1) informing and defining the problem without judgement and (2) generating and sketching lots of ideas without judgement, it’s often a relief for the team to start (3) analysing and judging the potential solutions, taking into account the project’s business goals, development goals, and usability goals.

But what are the usability goals? How can a team quickly assess whether potential designs meet those usability goals? One easy answer is to provide the team with a project-appropriate checklist.

Make your own checklist, or find one on the Internet. To make your own, start with a textbook that you’ve found helpful and inspiring; for me, that’s About Face by Alan Cooper. To this, I add things that my experience tells me will help the team: my “favourites” and my pet peeves. For this last category, I might consult the Ribbon section of the Vista UX Guide, the User Interface section of the iPhone human-interface guidelines, and so on.

[local /wp-content/uploads/2009/04/make-usability-checklists.wmv]
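If it helps to make the checklist concrete, here’s a minimal sketch of one kept as structured data, so the team can score each design candidate against it. The items, sources, and the review function are illustrative assumptions, not a canonical list:

    # A minimal sketch of a project-appropriate usability checklist, kept as
    # data so the team can reuse it and score each design candidate.
    # The items and sources below are illustrative, not a canonical list.
    CHECKLIST = [
        # (source, check)
        ("About Face",     "Does the design follow the user's mental model?"),
        ("About Face",     "Are frequent tasks reachable with minimal excise?"),
        ("Vista UX Guide", "Do Ribbon groups match how users think about their tasks?"),
        ("iPhone HIG",     "Are touch targets comfortably sized?"),
        ("Pet peeve",      "Is every error message actionable?"),
    ]

    def review(design_name, answers):
        """Print a pass/fail summary of one design candidate against the checklist."""
        passed = sum(1 for _, check in CHECKLIST if answers.get(check, False))
        print(f"{design_name}: {passed}/{len(CHECKLIST)} checks passed")
        for source, check in CHECKLIST:
            mark = "PASS" if answers.get(check, False) else "FAIL"
            print(f"  [{mark}] ({source}) {check}")

Keeping the checklist as data rather than prose makes it easy to trim or extend per project.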

Imagine users asking “Help me”

When you’re tempted to cut corners off your product, what do you cut?

Is it the usability?

Today, I watched a video of people helping a motorised cardboard construction, or “robot,” to navigate. The so-called tweenbot has a flag on it that says “Help me,” and that asks people to send it on its way to its outdoor destination. (Image derived from a photo © Kacie Kinzer.)

And people help it, time and again, get to where it “needs” to go.

On your next project, when you’re looking for corners to cut, I’d like you to imagine your future users wearing a sign that says “Help me.” Will that encourage you to help them?

The tweenbot pictured above is from Kacie Kinzer’s Tweenbots site, which creates a narrative about “our relationship to space and our willingness to interact with what we find in it.”

Teamwork reduces design risk

It takes a range of skills to develop a product. Each skill—embodied in the individuals who apply that skill—brings with it a different focus:

  • Product managers talk about features and market needs.
  • Business development talks about revenue opportunities.
  • Developers talk about functionality.
  • Usability analysts talk about product and user performance.
  • Interaction designers talk about the user experience.
  • QA talks about quality and defects.
  • Marketing talks about the messaging.
  • Technical communicators talk about information and assistance.
  • You may be thinking of others. (I’m sorry I’ve overlooked them.)

What brings all these together is design.

Proper design begins with information: a problem statement or brief, information about the targeted users and the context of use, the broad-brush business constraints, some measurable outcomes (requirements, metrics, or goals), and access to a subject-matter expert to answer questions about the domain and perhaps to present a competitor analysis.
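To make those starting inputs concrete, here’s a minimal sketch that treats the brief as a structured record the team fills in before sketching begins. The class and field names are hypothetical:

    # A minimal sketch of the information that design work starts from.
    # Field names are illustrative; adapt them to your project's vocabulary.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DesignBrief:
        problem_statement: str               # the brief: what problem are we solving?
        target_users: list[str]              # who the design is for
        context_of_use: str                  # where and how the product will be used
        business_constraints: list[str]      # broad-brush constraints (budget, dates, platform)
        measurable_outcomes: list[str]       # requirements, metrics, or goals
        subject_matter_expert: str           # who answers questions about the domain
        competitor_analysis: Optional[str] = None  # perhaps presented by the SME

        def is_complete(self) -> bool:
            """Design shouldn't start until the essentials are filled in."""
            return bool(self.problem_statement and self.target_users
                        and self.measurable_outcomes and self.subject_matter_expert)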

Design continues by saturating the design space with ideas and, after the space is filled, analysing and iterating the ideas into possible solutions. The design process will raise questions, such as:

  • What is the mental model?
  • What are the use cases?
  • What is the process?

The analysis will trigger multiple rapid iterations, as the possible solutions are worked toward a single design:

  • Development. What are the technical constraints? Coding costs? Effect on the stability of the existing code? Downstream maintenance costs?
  • Usability. Does the design comply with heuristics? Does it comply with the standards of the company, the industry, and the platform? Does paper prototyping predict that the designed solution is usable?
  • User experience. Will the designed solution be pleasing due to its quality, value, timeliness, efficiency, innovation, or the five other common measures of customer satisfaction?
  • Project sponsor. Is the design meeting the requirements? Is it reaching its goals or metrics?

It’s risky to expect one person—a developer, for example—to bring all these skills and answers to the table. A team—properly facilitated and using an appropriate process—can reduce the risk and reliably produce great designs.

Ten-year-old advice

Fresh advice, still:

“Usability goals are business goals. Web sites that are hard to use frustrate customers, forfeit revenue, and erode brands.

Executives can apply a disciplined approach to improve all aspects of ease-of-use. Start with usability reviews to assess specific flaws and understand their causes. Then fix the right problems through action-driven design practices. Finally, maintain usability with changes in business processes.”

—McCarthy & Souza, Forrester Research, September 1998

How to test earlier

Involving users throughout the software-development cycle is touted as a way to ensure project success. Does usability testing count as user contact? You bet! But since most companies test their products later in the process, when it’s difficult to react meaningfully to the user feedback, here are two ways to get your testing done sooner.

Prioritise. Help the Development team rank the importance of the individual programming tasks, and then schedule the important tasks to complete early.

  • If a feature must be present in order to have meaningful interaction, then develop it sooner.
  • For example, email software that doesn’t let you compose a message is meaningless. To get meaningful feedback from users, they need to be able to type an email.

    Developers often want to start with the technologically risky tasks. Addressing that risk early is good, but it must be balanced against the risk of a product that’s less usable or unusable.

  • If a feature need not be present or need not be working fully in order to have meaningful interaction, then provide hard-coded actions in the interim, and add those features later (see the code sketch after this list).
  • For example, if the email software lets users change the message priority from Standard to Important, hard-code it for the usability test so the priority is always Standard.

  • If a less meaningful feature must be tested because of its importance to the business strategy, then develop it sooner.
  • For example, email software that lets users record a video may be strategically important for the company, though users aren’t expected to adopt it widely until most laptops ship with built-in cameras.
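Here’s a minimal sketch of the hard-coding tactic, using a hypothetical email client; the class and method names are invented for illustration. The composing path works for real, while the priority control is stubbed:

    # A minimal sketch of hard-coding an unfinished feature for a usability test.
    # These email-client names are hypothetical; only the pattern matters.
    class Message:
        def __init__(self, to, body):
            self.to = to
            self.body = body
            self.priority = "Standard"

    class EmailClient:
        def compose(self, to, body):
            # Must really work: without composing, the test session is meaningless.
            return Message(to, body)

        def set_priority(self, message, priority):
            # Stub for the usability test: the control appears in the UI, but the
            # requested priority is deliberately ignored, so the priority stays
            # "Standard" until the real feature ships.
            message.priority = "Standard"

Participants can still click the priority control, and the team learns whether they find it, without waiting for the feature to be built.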

Schedule. For each feature to be tested, get the Development team to allocate time to respond to usability recommendations, and then ensure this time is neither reallocated to problem tasks, nor used up during the initial development effort of the to-be-tested features. Engage the developers by:

  • Sharing the scenarios in advance.
  • Updating them on your efforts to recruit usability-study participants.
  • Retesting after developers incorporate your recommendations, and then reporting improvements in user performance.

Development planning that prioritises programming tasks based on the need to test, and then allows time in the schedule to respond to recommendations, is more likely to result in usable, successful products.

User mismatch: discard data?

When you’re researching users, every once in a while you come across one who’s an anomaly. You must decide whether to exclude their data points from the set or to adjust your model of the users.

Let me tell you about one such user. I’ll call him Bob (not his real name). I met Bob during a day of back-to-back usability tests of a specialised software package. The software has two categories of users:

  • Professionals who interpret data and use it in their design work.
  • Technicians who mainly enter and check data. A senior technician may do some of the work of a professional user.

When Bob walked in, I went through the usual routine: welcome; sign this disclaimer; tell me about the work you do. Bob’s initial answers identified him as a professional user. Once on the computer, though, Bob was unable to complete the first step of the test scenario. Rather than try to solve the problem, he sat back, folded his hands, and said: “I don’t know how to use this.” Since Bob was unwilling to try the software, I instead had a conversation (actually an unstructured interview) with him. Here’s what I learned:

  • Bob received no formal product training; he was taught by his colleagues. Typical of more than half the professionals.
  • Bob has a university degree that is only indirectly related to his job. Atypical of professionals.
  • He’s young (graduated three years ago). Atypical of professionals, but desirable because many of his peers are expected to retire in under a decade.
  • Bob moved to his current town because his spouse got a job there, and he would be unwilling to move to another town for work. Atypical: professionals in this industry often work in remote locations for high pay.
  • Bob is risk averse. Typical of the professionals.
  • He is easily discouraged, and isn’t inclined to troubleshoot. Atypical: professionals take responsibility for driving their troubleshooting needs.
  • Bob completes the same task once or several times a day, with updated data each time. Atypical of professionals; this is typical of technicians.

I decided to discard Bob’s data from the set.

The last two observations are characteristic of a rote user. Some professionals are rote users because they don’t know the language of the user interface, but this did not apply to Bob. There was a clear mismatch between the work that Bob said he does and both his lack of curiosity and non-performance in the usability lab. These usability tests took place before the 2008 economic downturn, when professionals in Bob’s industry were hard to find, so I quietly wondered whether hiring Bob had been a desperation move on the part of his employer.

If Bob had been a new or emerging type of user, discarding his data would have been a mistake. Imagine if Bob had been part of a new group of users:

  • What would be the design implications?
  • What would be the business implications?
  • Would we need another user persona to represent users like Bob?

Why products stay pre-chasm

I’ve spent some time working with legacy products—software for which the core code was written before “usability” was a term developers had heard of, back when developers were still called programmers.

I remember my first conversation, held last century, with a developer about his users and the usability of his legacy product. I used Geoffrey Moore’s book, Crossing the Chasm, to introduce the idea that there are different types of users. The product-adoption curve (illustrated) shows five groups of users; the area under the curve represents the quantity of potential users, in the order in which they will typically adopt the product. According to Moore, users who are technology enthusiasts and visionaries either enjoy or don’t mind being on the bleeding edge, because of the benefits they get from using the product. Usability isn’t one of the typical benefits to the left of the chasm, where new products are first introduced. Wider adoption follows only after the product is made to cross the chasm, but this takes a concerted effort on the part of the development team.

What delays a product from crossing the chasm?

  • Revenues that are sufficient to let the company putter along.
  • Team members who themselves are tech enthusiasts or visionaries. This occurs, for example, when a nutritionist who also knows how to program develops software for nutritionists.
  • Team members who have infrequent exposure to newer user experiences. This could include someone who still uses Microsoft Office 2003, eschews social-networking applications on the web, or uses Linux.
  • Product managers whose roles are weakly defined or absent, making it more likely that new features get developed for the technical challenge of it, rather than for the user need or the business strategy.
  • Sales reps who ask for more functions in the product in order to make a sale—and a development culture that goes along with this.
  • Managers whose vision and strategic plan leave out user experience and usability.
  • Team members who have been on the team for decades.
  • Organisations that don’t use business cases to help them decide where to apply business resources.
  • Organisations with poor change-management practices—because moving a product across the chasm is more likely dramatic and disruptive than smooth and gradual.
  • Designers who are weak or absent in the process, so that “design” happens on the fly, during development.

If you recognise your work environment or yourself in this list, do you want to change things? If you do, but don’t know how, what actions can you take to learn the answer?

Which user involvement works

User-centred design (UCD) advocates involving users in the design process. Have you wondered what form that user involvement could take, and which forms lead to the most successful outcomes?

I recently came across data that Mark Keil published a while ago. He surveyed software companies and correlated project outcomes with the type of user access that designers and developers had.

Type of contact with users (bar length shows relative effectiveness)

For custom software projects:
  • Facilitate teams, hold structured workshops with users, or use joint-application development (JAD). ██████████
  • Expose users to a UI prototype or early version to uncover any UI issues. ██████
  • Expose users to a prototype or early version to discover the system requirements. ████
  • Hold one-on-one semi-structured or open-ended interviews with users. ████
  • Test the product internally (acceptance testing rather than QA testing for bugs) to uncover new requirements. ██
  • Use an intermediary to define user goals and needs, and to convey them to designers and developers. ██
  • Collect user questions, requirements, and problems indirectly, by e-mail or online locations.

For packaged or mass-market software projects:
  • Listen to live/synchronous phone-support, tech-support, or help-desk calls. ████████
  • Hold one-on-one semi-structured or open-ended interviews with users. ██████
  • Expose users to a UI prototype or early version to uncover any UI issues. ████
  • Convene a group of users, from time to time, to discuss usage and improvements. ████
  • Expose users to a prototype or early version to discover the system requirements. ██
  • Test the product internally (acceptance testing rather than QA testing for bugs) to uncover new requirements. ██
  • Consult marketing and sales people who regularly meet with and listen to customers. ██
  • At trade shows, show a mock-up or prototype to users and get their feedback.

Not reported as effective in this 1995 source:
  • Conduct a (text) survey of a sample of users.
  • Conduct a usability test to “tape and measure” users in a formal usability lab. (This study precedes such products as TechSmith Morae.)
  • Observe users for an extended period, or conduct ethnographic research.
  • Conduct focus groups to discuss the software.

Although Keil’s article includes quantitative data, his samples are small. I opted to show only the relative usefulness of various methods. My descriptions, above, are long because the original article uses 1995 terms that have shifted in meaning. I believe some of the categories now overlap, due to changes in technology and method. For example, getting users to try a prototype of the UI in order to uncover UI issues sounds like the early usability testing that I do with TechSmith Morae, yet the 1995 results gave these activities very different effectiveness ratings.

For details, see the academic article by Mark Keil (Customer-developer links in software development). Educational publishers typically require a fee for access.

These methods also relate to research I’m doing on the epistemology of usability analysis.

Internet Explorer leapfrogs Firefox?

Previously, I wrote about GUI: when to copy it and when to design it. When your competition has something better, I recommended designing, to leapfrog your competitor. Here’s an example of two competing web browsers:


At first glance, the new Internet Explorer 8 address bar looks like a copy of Firefox’s existing awesome bar, but click the image for an enlarged view. You’ll see that:

  • The on-the-fly suggestions are grouped as History and Favourites.
  • Each group lists only five items, by default.
  • To remove an item (think stale links and mistyped URLs), highlight the item and then click the × that appears. (A sketch of this behaviour follows the list.)
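Here’s a rough sketch of that grouped, capped suggestion behaviour. The data shapes and the substring match are simplifying assumptions for illustration:

    # A rough sketch of grouped, capped address-bar suggestions.
    # Plain URL strings and a substring match stand in for the real logic.
    MAX_PER_GROUP = 5  # each group lists only five items by default

    def suggest(typed, history, favourites):
        """Return on-the-fly suggestions, grouped and capped per group."""
        def matches(urls):
            return [u for u in urls if typed.lower() in u.lower()][:MAX_PER_GROUP]
        return {"History": matches(history), "Favourites": matches(favourites)}

    def remove_item(source, item):
        """The × control: drop a stale link or mistyped URL from its source list."""
        if item in source:
            source.remove(item)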

Compared to Internet Explorer 7, Firefox clearly had the better address bar. Just as clearly, the additional features of the Internet Explorer 8 address bar are an attempt to leapfrog Firefox. After the public has used IE8 for three months, it’ll be interesting to hear whether users think Microsoft succeeded.

Read the related post, GUI: Copy it or design it.

Epistemology of usability studies

Currently, I’m conducting research on usability analysis and on how Morae software might influence that. My research gaze is rather academic, in that I’m especially interested in the epistemology of usability analysis.

One of my self-imposed challenges is to make my research relevant to usability practitioners. I’m a practitioner and CUA myself, and I have little time for academic exercises because I work where the rubber hits the road. This blog post outlines what I’m up to.

At Simon Fraser University, I learned that epistemological approaches make different assumptions about what is knowable. On one side (below, left), it’s about numbers, rates, percentages, graphs, grids, tables, and proving absolute truths. On the other side (below, right), it’s about seeking objectivity while knowing that it’s impossible, because everything has a cultural context. The epistemology you choose, when doing research, depends on what you believe. And the epistemology dictates what methods you use and how you report your results.

Left: “You can be certain of what you know.”
Right: “You cannot be objective about what you know.”

Let’s look at some examples.

Study 1 fits with the view (above, left) that “you can be certain of what you know.” I plan and conduct a quantitative study to measure the time it takes a series of users to complete two common tasks in a software package: upgrading to the latest version of the software, and activating the software. I make appointments with users. In my workplace, I give each user a scenario and a computer. I observe them and time them as they complete the tasks by using the software package. My hope is that statistical analysis will give me results that I can report, including the average time on task with error bars, as the graph (right) illustrates.
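As a minimal sketch of that analysis, here’s how the average time on task and its error bars might be computed. The task times below are invented for illustration; real numbers come from the recorded sessions:

    # A minimal sketch of Study 1's analysis: mean time on task with error bars.
    # The sample times (in seconds) are invented for illustration.
    import statistics

    upgrade_times  = [312, 275, 401, 298, 350, 330]
    activate_times = [95, 120, 88, 140, 101, 110]

    def mean_with_error(times):
        """Return (mean, half-width of an approximate 95% confidence interval)."""
        mean = statistics.mean(times)
        sem = statistics.stdev(times) / len(times) ** 0.5  # standard error of the mean
        return mean, 1.96 * sem  # normal approximation; small samples deserve a t-value

    for task, times in [("Upgrade", upgrade_times), ("Activate", activate_times)]:
        m, err = mean_with_error(times)
        print(f"{task}: {m:.0f} s ± {err:.0f} s (n={len(times)})")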

Study 2 fits with the view (above, right) that “you cannot be objective about what you know” because all research takes place within a context. To lessen the impact of conducting research, I contact users to ask if I can study their workplace. I observe each user for a day. My hope is to analyse the materials and interaction that I’ve observed in context—complete with typical interruptions, distractions, and stimuli. Since a new software version has just been released, my hope is that I’ll get to observe them as they upgrade. I’ll report any usability issues, interaction-design hurdles, and unmet needs that I observe.

The above are compilations of studies I conducted.

  • Study 1 revealed several misunderstandings and installation problems, including a user who abandoned the installation process because he believed it was complete. I was able to report the task success rate and have the install wizard fixed.
  • Study 2 revealed that users write numbers on paper and then re-enter them elsewhere, which had not been observed when users visited our site for usability testing. One user told me: “I never install the latest version because the updates can be unstable,” and another said: “I only upgrade if there’s a fix for a feature I use” to avoid unexpected new defects. I was able to report the paper-based workaround and the users’ feelings about quality, for product managers to reflect in future requirements.

Clearly, there’s more than one way to conduct research, and not every method fits every team. That’s an idea that can be explored at length.

This has me wondering: which method fits what, when, where? Is there a relationship between a team’s development process and the approach to user research (epistemology) that it’s willing to embrace? …between its corporate usability maturity and the approach?

Those are two of the lines of inquiry in my research at Simon Fraser University.

If you liked this post, you may also like Are usability studies experiments?