For professional reasons, I like to mess around with software. It’s a form of training, because some of the messing around leads to frustration, confusion, and inefficiency. And that’s good.
My hope is that my experiences will help me to better understand what I put various groups of software users through when they use the software I helped design and build.
An easy way to mess around is by changing default settings. For example, my iTunes isn’t set to English. This helps me understand the experience of users who learned one language at home as children and now use another language at work as adults. It’s beneficial not just to experience the initial pain of memorising where to click (as I become a rote user in a GUI I cannot read), but also to feel the additional moments of frustration when I must do something new—an occasional task whose command vector I haven’t memorised.
Another easy way to mess around is to switch between iMac and Windows computers. It’s not just the little differences, such as whether the Minimise/Maximise/Close buttons are on the left or right sides of the title bar, or whether that big key on the keyboard is labelled Enter or Return.
It’s also the experience of inefficiency. It’s knowing you could work faster, if only the tool weren’t in your way. This also applies to successive versions of “the same” operating system. This is the frustration of the transfer user.
It’s noticing how completely arbitrary many design standards are—how arbitrarily different between operating systems—such as the End key that either does or doesn’t move the insertion point to the end of the line.
Another easy way to mess around is to run applications in a browser that’s not supported. I do it for tasks that matter, such as making my travel bookings.
All this occasional messing around is about training myself. The experiences I get from this broaden the range of details I ask developers to think about as they convert designs into code and into pleasing, productive user experiences.
I’ve been waiting for a full-screen touch UI with haptic response. That is, if the application displays a button on the screen, when I push it with my finger, I want to feel it clicking. Similarly, when I nudge an object, I want to feel its edge on screen.
Imagine the challenges in designing for the kind of hardware depicted in the video, above! It doesn’t exist yet, but I’m ready. I can also imagine haptic icons on mobile-phone handsets, because I heard researchers present their work at the University of British Columbia a few years ago. I can imagine the responsibility and the pressure of being the first to market with haptic icons. The market leader will get to define what OK feels like and what Cancel feels like, for years to come. The idea is that mobile phones should touch you back, with haptic icons, at the place where your thumb typically touches the handset. Phones could signal—through touch—that you have a call waiting. Similarly, a camera could indicate—through touch—that it’s focused and ready to shoot. Outside the world of portable electronics, a doorknob could signal—through touch—that there are already three people in the room, and a steering wheel could alert you—through touch—that your car needs refuelling.
Most of us will only get to decide which kind of haptic cue we want a screen to convey during a particular physical interaction between the user’s fingers and a screen. But I’m eagerly awaiting that technology. There are bound to be many common computing tasks where a finger can outperform a mouse.
Over the past month, I’ve come across the same discussion several times: “When designing a website or product, do you use wireframing or prototyping?”
The first part of my answer is: “Make sure you sketch, first.”
At the design stage, sketching, wireframing, and prototyping are not equal. Sketching is useful at the divergent phase of design because it lets the design participants express and capture lots of different ideas quickly and anywhere that pen and paper will work. Nothing is as fast as running a pen across a sheet of paper to capture an idea—and then another, and another. And since sketching is intentionally rough, everyone can do it.
Responding to the problem statement, first saturate the design space with lots of ideas, and then analyse and rapidly iterate them to a design solution.
I also believe sketching is great for the convergent phase of design, but there are potential hurdles that design participants may encounter. It can be challenging to convey complex interaction, 3D manipulation, transitions, and multi-state or highly interactive GUIs in sketches without learning a few additional techniques. This is unfortunate, because having to learn additional techniques reduces the near-universal accessibility of sketching.
The second part of my answer, therefore, is: “If you need to learn additional techniques to make sketching work, feel free to choose wireframes or prototyping as alternatives when there are compelling reasons to do so.”
I should point out that the three techniques—sketching, wireframing, and prototyping—are not mutually exclusive. Wireframes and paper prototypes can both be sketched—especially for simple or relatively static GUI designs.
There are no validity concerns with the use of low-fidelity sketches, as these readings show: