Train yourself in frustration, confusion, and inefficiency

For professional reasons, I like to mess around with software. It’s a form of training, because some of the messing around leads to frustration, confusion, and inefficiency. And that’s good.

My hope is that my experiences will help me to better understand what I put various groups of software users through when they use the software I helped design and build.

An easy way to mess around is by changing default settings. For example, my iTunes isn’t set to English. This helps me understand the experience of users who learned one language at home as children and now use another language at work as adults. The benefit isn’t just experiencing the initial pain of memorising where to click (as I become a rote user in a GUI I cannot read); it’s also the additional moments of frustration whenever I must do something new, an occasional task whose command vector I haven’t memorised.

Relating to the language challenges that some users face

Another easy way to mess around is to switch between iMac and Windows computers. It’s not just the little differences, such as whether the Minimise/Maximise/Close buttons are on the left or right sides of the title bar, or whether that big key on the keyboard is labelled Enter or Return.

Switching between operating systems

It’s also the experience of inefficiency. It’s knowing you could work faster, if only the tool weren’t in your way. This also applies to successive versions of “the same” operating system. This is the frustration of the transfer user.

It’s noticing how arbitrary many design standards are, and how arbitrarily they differ between operating systems: the End key, for example, either does or doesn’t move the insertion point to the end of the line.

Another easy way to mess around is to run web applications in a browser they don’t support. I do it for tasks that matter, such as making my travel bookings.

All this occasional messing around is about training myself. The experiences I get from this broaden the range of details I ask developers to think about as they convert designs into code and into pleasing, productive user experiences.

In a separate IxDA discussion thread, a few people reacted to this blog post:

  • Try a Dvorak keyboard instead of a Qwerty keyboard (Johnathan Berger).
  • Watch children’s first use of a design (Brandon E.B. Ward).
  • Use only the keyboard, not the mouse (CK Vijay Bhaskar).
  • Sit in at the Customer Support desk for a day (Adrian Howard).
  • Search Twitter to find out how people feel about a product (Paul Bryan).


The future is haptic, right?

I’ve been waiting for a full-screen touch UI with haptic response. That is, if the application displays a button on the screen, when I push it with my finger, I want to feel it clicking. Similarly, when I nudge an object, I want to feel its edge on screen.

A video about haptic feedback – technology that touches back.

Imagine the challenges in designing for the kind of hardware depicted in the video, above! It doesn’t exist yet, but I’m ready. I can also imagine haptic icons on mobile-phone handsets, because I heard researchers present their research at the University of British Columbia a few years ago. I can imagine the responsibility and the pressure of being the first to market with haptic icons. The market leader will get to define what OK feels like and what Cancel feels like, for years to come.

Click to read about piezo-based skin-stretch display

The idea is that mobile phones should touch you back, with haptic icons, at the place where your thumb typically touches the handset. Phones could signal—through touch—that you have a call waiting. Similarly, a camera could indicate—through touch—that it’s focused and ready to shoot. Outside the world of portable electronics, a doorknob could signal—through touch—that there are already three people in the room, and a steering wheel could alert you—through touch—that your car needs refuelling.

Most of us will only get to decide which kind of haptic cue a screen should convey during a particular physical interaction with the user’s fingers. But I’m eagerly awaiting that technology. There are bound to be many common computing tasks where a finger can outperform a mouse.

Sketch, wireframe, prototype

Over the past month, I’ve come across the same discussion several times: “When designing a website or product, do you use wireframing or prototyping?”

The first part of my answer is: “Make sure you sketch, first.”

At the design stage, sketching, wireframing, and prototyping are not equal. Sketching is useful at the divergent phase of design because it lets the design participants express and capture lots of different ideas quickly and anywhere that pen and paper will work. Nothing is as fast as running a pen across a sheet of paper to capture an idea—and then another, and another. And since sketching is intentionally rough, everyone can do it.


Responding to (1) the problem statement, first (2) saturate the design space with lots of ideas, and then (3) analyse and rapidly iterate them to (4) a design solution.

I also believe sketching is great for the convergent phase of design, but there are potential hurdles that design participants may encounter. It can be challenging to convey complex interactions, 3D manipulation, transitions, and multi-state or highly interactive GUIs in sketches without learning a few additional techniques. This is unfortunate, because having to learn additional techniques reduces the near-universal accessibility of sketching.

The second part of my answer, therefore, is: “If you need to learn additional techniques to make sketching work, feel free to choose wireframing or prototyping as alternatives when there are compelling reasons to do so.”

I should point out that the three techniques—sketching, wireframing, and prototyping—are not mutually exclusive. Wireframes and paper prototypes can both be sketched—especially for simple or relatively static GUI designs.

There are no validity concerns with the use of low-fidelity sketches, as these readings show: