Leaner, more agile

This week, I’m attending a few days of training in agile software development, in an Innovel course titled Lean, Agile and Scrum for Project Managers and IT Leadership.

My first exposure to agile was in Desiree Sy's 2005 presentation, Strategy and Tactics for Agile Design: A design case study, to the Usability Professionals Association (UPA) annual conference in Montreal, Canada. It was a popular presentation then, and UPA-conference attendees continue to be interested in agile methods now. This year, at the UPA conference in Portland, USA, a roomful of usability analysts and user-experience practitioners discussed the challenges that agile methods present to their practice. One of the panellists told the room: “Agile is a response to the classic development problem: delivering the wrong product, too late.”


Design requires courage and trust, not just user involvement

Designing is usually a rewarding activity, but the path from start to finish can be filled with frustration and even panic. I’ve seen design processes work—and come to the realisation that “My own designs benefited from rapid iteration!”

These humbling experiences helped me learn to trust the process, even in the face of frustration or panic. It’s these experiences that give me the courage to follow the design process, even when it isn’t clear how to resolve the tension between conflicting design constraints.

In the face of the unknown, individuals, and especially teams, tend to turn to what is known. If needed, they’ll manufacture “known” data by deferring the choice to users. Here’s part of what Larry Constantine wrote about courage in software design, in a paper that advocates for user involvement at the right time:

Most damning and least recognized among the limitations of user-centered design is the way it subtly discourages courage. Courage is one of the central tenets of extreme programming and agile development methods. […] User-centered design makes it too easy for designers to abdicate responsibility in deference to user preference, user opinion, and user bias. In truth, it is hard to stick with something you know works when users are screwing up their faces at it. What if you are wrong? What if you are not as good a designer as you thought you were? It takes real courage and conviction to stand up for an innovative design in the face of users who complain that it is not what they expected or who want it to work just like some other software or who object to certain sorts of features as a matter of course. It takes responsible judgment to know when to listen to users and when to ignore them.

In the many design sessions I have facilitated, I’ve seen that lack of courage expressed by a participant three times. Each time, it sounded like a mix of panic and frustration:

The solution has been on the wall since the first round!

The design sessions I facilitate ask participants to saturate the design space with lots of ideas. They each bring five sketches—five substantially different ideas—and then, after sharing their ideas with the other participants, they rapidly iterate the first 15 or 20 sketches to develop even more. All this takes place before any analysis.

When the goal is to saturate the design space, identifying as many solutions as possible in a short time, there is more material to influence the design once the analysis begins. Inevitably, the design that the team decides on is not one that was already on the wall. Motivated design participants quickly learn this, and, in most cases, become advocates of the process.

For most development teams, the Five Sketches™ process I introduce is a departure from the status quo, so it takes courage for their team members to take a stand, to say “I will use this process” for design problems that need it.

How to test earlier

Involving users throughout the software-development cycle is touted as a way to ensure project success. Does usability testing count as user contact? You bet! But since most companies test their products later in the process, when it’s difficult to react meaningfully to the user feedback, here are two ways to get your testing done sooner.

Prioritise. Help the Development team rank the importance of the individual programming tasks, and then schedule the important tasks to complete early.

  • If a feature must be present in order to have meaningful interaction, then develop it sooner.
  • For example, email software that doesn’t let you compose a message is meaningless. To get meaningful feedback, users need to be able to type an email.

    Developers often want to start with the technologically risky tasks. Addressing that risk early is good, but it must be balanced against the risk of a product that’s less usable or unusable.

  • If a feature need not be present or need not be working fully in order to have meaningful interaction, then provide hard-coded actions in the interim, and add those features later.
  • For example, if the email software lets users change the message priority from Standard to Important, hard-code it for the usability test so the priority is always Standard (a rough code sketch of this follows the list).

  • If a less meaningful feature must be tested because of its importance to the business strategy, then develop it sooner.
  • For example, email software that lets users record a video may be strategically important for the company, though users aren’t expected to adopt it widely until most laptops ship with built-in cameras.
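
To make the interim hard-coding concrete, here is a minimal sketch in TypeScript. It is only an illustration of the idea, not code from any real product: the names (Priority, DraftMessage, composeMessage, getMessagePriority, USABILITY_TEST_BUILD) are hypothetical, and a real team might use a proper feature flag or environment variable instead of a constant.

```typescript
// Hypothetical email-client module, sketched to show how a secondary feature
// can be hard-coded for an early usability test while the essential feature
// (composing a message) works for real.

type Priority = "Standard" | "Important";

// Switch for the usability-test build. In a real project this might be a
// feature flag or an environment variable rather than a hard-coded constant.
const USABILITY_TEST_BUILD: boolean = true;

interface DraftMessage {
  to: string;
  subject: string;
  body: string;
  priority: Priority;
}

// Composing a message must work for the test to be meaningful, so it is
// implemented for real and scheduled early in development.
function composeMessage(to: string, subject: string, body: string): DraftMessage {
  return { to, subject, body, priority: getMessagePriority() };
}

// Changing the priority is not needed for meaningful interaction, so the
// usability-test build hard-codes it to "Standard"; the real implementation
// is deferred until after the test.
function getMessagePriority(): Priority {
  if (USABILITY_TEST_BUILD) {
    return "Standard"; // interim, hard-coded behaviour
  }
  // Real implementation to be added after the usability test.
  return "Standard";
}

// A test participant can still compose and "send" a message.
const draft = composeMessage("someone@example.com", "Hello", "Trying the new client.");
console.log(draft);
```

The essential point: the feature that participants must exercise (composing) is developed for real, while the stubbed feature can be scheduled after the test without blocking it.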

Schedule. For each feature to be tested, get the Development team to allocate time to respond to usability recommendations, and then ensure this time is neither reallocated to problem tasks nor used up during the initial development of the to-be-tested features. Engage the developers by:

  • Sharing the scenarios in advance.
  • Updating them on your efforts to recruit usability-study participants.
  • Retesting after developers incorporate your recommendations, and then reporting improvements in user performance.

Development planning that prioritises programming tasks based on the need to test, and then allows time in the schedule to respond to recommendations, is more likely to result in usable, successful products.

Which user involvement works

User-centred design (UCD) advocates involving users in the design process. Have you wondered what form that user involvement could take, and which forms lead to the most successful outcomes?

I recently came across data that Mark Keil published a while ago. He surveyed software companies and correlated project outcomes with the type of user access that designers and developers had.

Type of contact with users, and its relative effectiveness (longer bar = more effective)
  For custom software projects
  Facilitate teams, hold structured workshop with users, or use joint-application development (JAD). ██████████
  Expose users to a UI prototype or early version to uncover any UI issues. ██████
  Expose users to a prototype or early version to discover the system requirements. ████
  Hold one-on-one semi-structured or open-ended interviews with users. ████
  Test the product internally (acceptance testing rather than QA testing for bugs) to uncover new requirements. ██
  Use an intermediary to define user goals and needs, and to convey them to designers and developers. ██
  Collect user questions, requirements, and problems indirectly, by e-mail or online locations.
  For packaged or mass-market software projects
  Listen to live/synchronous phone support, tech-support, or help-desk calls. ████████
  Hold one-on-one semi-structured or open-ended interviews with users. ██████
  Expose users to a UI prototype or early version to uncover any UI issues. ████
  Convene a group of users, from time to time, to discuss usage and improvements. ████
  Expose users to a prototype or early version to discover the system requirements. ██
  Test the product internally (acceptance testing rather than QA testing for bugs) to uncover new requirements. ██
  Consult marketing and sales people who regularly meet with and listen to customers. ██
  At trade shows, show a mock-up or prototype to users and get their feedback.
  Not reported as effective in this 1995 source
  Conduct a (text) survey of a sample of users.
  Conduct a usability test to “tape and measure” users in a formal usability lab. (This study precedes such products as TechSmith Morae.)
  Observe users for an extended period, or conduct ethnographic research.
  Conduct focus groups to discuss the software.

Although Keil’s article includes quantitative data, his samples are small. I opted to show only the relative usefulness of various methods. My descriptions, above, are long because the original article uses 1995 terms that have shifted in meaning. I believe some of the categories now overlap, due to changes in technology and method. For example, getting users to try a prototype of the UI in order to uncover UI issues sounds like the early usability testing that I do with TechSmith Morae, yet the 1995 results gave these activities very different effectiveness ratings.

For details, see the academic article by Mark Keil (Customer-developer links in software development). Educational publishers typically require a fee for access.

These methods also relate to research I’m doing on the epistemology of usability analysis.

Rigid UCD methodology fails?

I received an e-mail from someone at the 2008 IA Summit about Jared Spool’s declaration that UCD is dead:

——Forwarded message——
From: P
Date: Sun 13/04/2008, 2:54 PM

Hi Jerome,

I’m at the IA Summit in Miami right now, and hearing about all of the things that are going on makes me think of you. One of the interesting sessions was Jared Spool’s keynote speech. He conducted research into what makes certain companies better able to produce effective designs. He used this model to talk about the various approaches departments take to facilitate design:

Things that facilitate design

He said all design involves a process, whether it’s been formalized or not. Interesting, though not surprising: companies that have dogmatic UCD leadership or use a rigid UCD methodology are unlikely to create anything innovative. To innovate, you want to apply techniques in sometimes surprising ways to solve problems that they were not intended for (those are the “tricks”).

OK, I’m going back out in the warm (hot!) weather.

– P

Of course, the lack of process doesn’t guarantee innovation, either, nor does it guarantee you’ll be able to repeat your (accidental) successes. I believe a successful design process must involve some form of generative design (as Five Sketches™ does) that’s based on knowledge of the users’ context. I also believe that, once you’ve internalised those two things, you can use almost any form of facilitation to design good products.