Modernist design: Beyond flat and simple

A decade ago, the big players in software adopted modernist design for their user interfaces. With this redesign, digital came of age, embracing a look and feel that’s no longer bound by last century’s conventions or by the inexperience of those new to computing.

Modernist user interfaces were intended to focus people on their current task, support fast-paced use, and embrace the fact that the interfaces are digital.

At first glance, modernist interfaces were simple and flat when compared to older graphical user interfaces. But making designs that are simple and flat didn’t always work. Designers who didn’t understand the principles behind modernist interface design made uninformed choices that hurt their customers.

Continue reading “Modernist design: Beyond flat and simple”

After you publish, test and review

As a consultant to business and government, I know my clients often see publishing as a project that has a start, middle, and end. Once they’ve published their app, their data, or their text and media, they often express relief that the job is finished and then want to forget about it.

The thing is, they’re not finished. If the app, data, or content is mission critical, they must not simply forget about it.

To illustrate what can happen when people publish and forget, here’s a simple story. It’s from the perspective of a customer or “user” of some material that was published by others. In this story, I am one of those customers.

Continue reading “After you publish, test and review”

Would you have designed it that way?

In my day-to-day life, I often think about design problems as I encounter them. I find myself wondering about information that I don’t have—details that would help me solve the problem I noticed. And I wonder: faced with the same constraints, would I have come up with the same solution? Here’s one I encountered.

Passengers waiting to board a ferry

Last week, some friends wanted to visit their family on an island. Where I live, people use ferries to travel between various islands and the mainland. At times, I’ve made the crossing on foot, by bus, or by passenger car. The choice might depend on the size of our group, how far we’re going on the other side, how much we want to spend, and what time of day and year we’re travelling. On busy days the ferries fill to capacity, and traffic reports may announce “a 1- or 2-sailing wait” between points. From time to time the media discusses changes to ferry service, prices, and ridership. All in all, there are a lot of factors influencing the deceptively simple question: “When I get to the ferry, will there be space for me on board?” The question could also be: “Can I avoid waiting in line?”

The ferry company’s website answers this question in a seemingly fragmented way, and that got me thinking: why was the answer fragmented, and what user needs was the website’s current design meeting? The ferry company segments its audience by mode of travel. This segmentation is logical for an audience motivated by cost, because a ferry passenger on foot pays less than a ferry passenger in a car. But when other decision-making factors are more important than price—such as space availability—segmenting users by mode of travel might not be helpful.

Can I avoid waiting?

The friends I mentioned earlier had all the time in the world to get to their family on the island. But they didn’t want to wait in line for hours. Finding the answer to “is there space for us, or will we have to wait” is complicated because the answers seem to be organised by mode of travel on different pages of the website. Here’s a reproduction of one of the first “is there space for me” answers I found on the website:

Is there space on the ferry?

Given the question, the above screen may not be clear. What is deck space? And—look closely at the orange bar—how much deck space is available? Is it zero or 100%? Is a reservation the same thing as a ticket? Does everyone require a reservation to board?

Here’s another way to present the same information, this time making it clearer that a driver’s willingness to pay more may influence wait time:

No reserved spaces on the ferry

Now it’s clear that this information about availability only applies to vehicles that want a reservation. That means foot passengers, bus passengers, and cyclists still don’t have an answer to the “will we have to wait” question. From experience, frequent travellers already know part of the answer: passengers on foot almost never have to wait, but occasional travellers and tourists wouldn’t know this. And travellers with vehicles may wonder about alternatives, because leaving the car on shore and boarding on foot could put them on an earlier ferry. The answer to “can we avoid waiting” may require a comparison of wait times for each mode of travel.

Here’s another way to present the information, this time listing more modes of travel:

Different types of space on the ferry

The above screen answers the “can we avoid waiting” question more clearly. In addition to providing greater certainty for some modes of travel, it also meets the (presumed) business need of generating revenue by selling reservations.
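If I were sketching how the data behind such a screen might be organised, I’d keep availability for every mode of travel in one structure, so the “can we avoid waiting” comparison can happen in one place. The figures and field names below are invented for illustration; they’re not the ferry company’s data.

```python
# Illustrative availability data, organised by mode of travel rather than
# scattered across separate pages. All figures and field names are made up.

next_sailing = {
    "foot passenger":     {"space_available": True,  "wait_sailings": 0},
    "bus passenger":      {"space_available": True,  "wait_sailings": 0},
    "cyclist":            {"space_available": True,  "wait_sailings": 0},
    "vehicle, reserved":  {"space_available": False, "wait_sailings": None},
    "vehicle, drive-up":  {"space_available": True,  "wait_sailings": 2},
}

def can_avoid_waiting(mode: str) -> str:
    """Answer the traveller's question for one mode of travel."""
    info = next_sailing[mode]
    if info["space_available"] and not info["wait_sailings"]:
        return f"{mode}: space on the next sailing"
    if info["wait_sailings"] is None:
        return f"{mode}: no reserved spaces left for this sailing"
    return f"{mode}: expect a {info['wait_sailings']}-sailing wait"

for mode in next_sailing:
    print(can_avoid_waiting(mode))
```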

Design questions, but no answers

It’s easy to theoretically “solve” a design problem that we encounter, but there are always unknowns.

  • Is there really a design problem? How would we know?
  • Would this design have been technically possible?
  • Would this design have been affordable?
  • Would this design have met the needs of many users, or only a few?
  • Would this design have been ill received by customers or interested groups?
  • and so on…

So if you can’t know all the answers, why bother with the exercise? Because it’s what we do, in our line of work.

The trigger for this exercise

Here’s an excerpt of the screen that inspired this post.

Excerpt of the original screen

When a user interface is for using—not for understanding—a product

The purpose of a user interface is not to explain how a product works. Instead, the interface is there to help people use the product. Here’s an idea: if someone can use your product without understanding how it works, that’s probably just fine.

What model does the user interface reflect?

Models are useful to help people make sense of ideas and things.

  • An implementation model is how engineers and software developers think of the thing they’re building. It helps them to understand the product’s inner workings, the sum of its software algorithms and physical components. For example, a car mechanic has an implementation model of combustion engines.
  • A mental model is how someone believes a product behaves when they interact with it. It helps them to understand how to use the product. For example, a typical car driver has a mental model of pressing the accelerator pedal to go faster and pressing the brake to slow down. This mental model doesn’t reflect how the car is built—there are many parts between the gas pedal and its spinning tires that typical drivers don’t know about.

The implementation model and the mental model can be very similar. For example, the mental model of using a wood saw is that “the saw makes a cut when I drag it back and forth across the wood.” This overlaps with the implementation model. In addition to the back-and-forth user action, the implementation model also includes an understanding of how the saw’s two rows of cutting edges—one for the forward stroke and one for the backward stroke—cut the wood fibres, break the cut fibres loose, and then clear them from the kerf, and whether the saw’s tooth shape is better suited to fresh wood or dried wood.

The mental and implementation models can overlap, or not

The implementation model and the mental model can also be very different. Let’s consider another example: getting off a public-transit bus. The mental model of opening the exit doors is that “When the bus stops, I give the doors a nudge and then the doors open fully.” The implementation model of the exit doors is that, once the bus stops and the driver enables the mechanism, the exit doors will open when a passenger triggers a sensor. Now consider this: if the sensor is a touch sensor then the passenger’s mental model of “nudging the door” is correct. But if the sensor is a photoelectric sensor—a beam of light—then the passenger’s mental model of “nudging the door” is incorrect.
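To make the contrast concrete, here’s a small sketch of the photoelectric implementation model. The class and its details are my own illustration, not the actual transit hardware.

```python
# Sketch of the exit-door implementation model with a photoelectric sensor.
# The passenger's mental model is just "nudge the door"; the code below is
# roughly what has to be true for the doors to actually open.

class ExitDoor:
    def __init__(self) -> None:
        self.bus_stopped = False
        self.driver_enabled = False   # the driver releases the doors at a stop
        self.is_open = False

    def beam_broken(self) -> None:
        """A hand (or a nudge toward the door) interrupts the light beam."""
        if self.bus_stopped and self.driver_enabled:
            self.is_open = True

door = ExitDoor()
door.beam_broken()            # nothing happens: bus still moving
door.bus_stopped = True
door.driver_enabled = True
door.beam_broken()            # now the doors open
print(door.is_open)           # True
```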

To exit, break the photoelectric beam

Getting bus passengers to break the photoelectric beam was a real-life design challenge that was solved in different ways. In Calgary, public-transit buses use a large, complex sign on exit doors to present a mental model that’s somewhat consistent with the implementation model:

Signage explains the complex implementation model

TO OPEN THE DOOR

      1. WAIT FOR GREEN LIGHT
      2. WAVE HAND NEAR DOOR HERE

In Vancouver, public-transit buses use a large, simple sign on exit doors to present a mental model that’s inconsistent with the implementation model:

Signage for a simpler mental model

TOUCH HERE ▓ TO OPEN

In fact, touch does not open the exit doors at all—not on the Vancouver buses or the Calgary buses I observed. Only when a passenger breaks the photoelectric beam will the doors open. In Calgary passengers are told to wave a hand near the door. A Calgary bus passenger might conclude that the exit door has a motion sensor (partly true) or a proximity sensor (not true).  In Vancouver passengers are told to touch a target, and the touch target is positioned so the passenger will break the photoelectric sensor beam when reaching for the target. A Vancouver bus passenger might conclude that the exit door has a touch sensor (not true).

Calgary bus passengers are more likely to guess correctly how the exit door actually works because the sign presents a mental model that partly overlaps the implementation model: the door detects hand-waving. But does that make it easier for someone without prior experience to exit the bus?

No, it’s harder.

It’s more difficult for a sign to get passengers to hold up a hand in the air in front of the door than it is to put a hand on the door. Here’s why: If you knew nothing about a door that you wanted to open outward, would you place a hand on the door and push? Or would you wave at it? From our lifelong experience with doors we know to push them open. Touching a door is more intuitive than waving at it, and that’s why “nudge the door” is a better mental model and thus an easier behaviour to elicit and train. The simpler mental model improves usability.

Rule of thumb for mental models

When an understanding of a product’s inner workings is unnecessary, staying true to the implementation model risks increasing the complexity of the user interface. Instead, have the user interface reflect a mental model that is simple, effective, and usable.

If you can relate the use of an object to a common experience or simple idea then do so—even if it doesn’t follow the implementation model. It is unnecessary for a system’s user interface to convey how the product was built. The user interface only needs to help users to succeed at their tasks.

No doubt there are cases where a lack of understanding of a product’s inner workings could cause danger to life and limb, or cause unintended destruction of property. In that case, the mental model needs to convey the danger or risk or, failing that, needs to overlap more with the implementation model.

Chip-card usability: Remove the card to fail

Card reader

I went to the corner store, made a purchase, and tried to pay by using a chip card in a machine that verifies my PIN. My first attempt failed because I pulled my card out of the card reader too soon, before the transaction was finished. I should add that I removed my card when the machine apparently told me to.

The machine said: “REMOVE CARD”

And just as I pulled my card out, I noticed the other words: “PLEASE DO NOT”

Have you done this, too…?

Since making a chip-card payment is an everyday task for most of us, I wonder: “What design tweaks would help me—and everyone else—do this task correctly the first time, every time?” Who would have to be involved to improve the success rate?

Ideas for a usable chip-card reader

A bit of brainstorming produced a list of potential solutions.

  • Less shadow. Design the device so it doesn’t cast a shadow on its own screen. The screen of the card reader I used was sunk deeply below its surrounding frame, and the frame cast a shadow across the “PLEASE DO NOT” phrase. (See the illustration.)
  • Better lighting. Ask the installer to advise the merchant to reduce glare at the cash register, by shading the in-store lighting and windows.
  • Freedom to move. Mount the device so the customer can turn it. The one I used was fixed to the counter, so I couldn’t turn it away from the glare.
  • Layout. Place the two lines of text—“PLEASE DO NOT” and “REMOVE CARD”—closer together, so they’re perceived as one paragraph. When perceived as separate paragraphs, the words “REMOVE CARD” are an incorrect instruction.
  • Capitalisation. Use sentence capitalisation to show that “remove card” is only part of an instruction, not the entire instruction.
  • Wording. Give the customer a positive instruction: “Leave your card inserted” could work. But I’d test with real customers to confirm this.
  • Predict the wait time. Actively show the customer how much longer to wait before removing their card. 15 seconds…, 10 seconds…, and so on.
  • Informal training. Sometimes, the cashier tells you on which side of the machine to insert your card, when to leave it inserted, and when to remove it.
  • Can you think of other ideas?

Listing many potential ideas—even expensive and impractical ones—is a worthwhile exercise, because a “poor” idea may trigger other ideas—affordable, good ideas. After the ideas are generated, they can be evaluated. Some would be costly. Some might solve one problem but cause another. Some are outside of the designers’ control. Some would have to have been considered while the device was still on the drawing board. Some are affordable and could be applied quickly.

Making improvements

Designers of chip-card readers have already made significant improvements by considering the customer’s whole experience, not just their use of the card-reader machine in isolation. In early versions, customers often forgot their cards in the reader. With a small software change, the card must now be removed before the cashier can complete the transaction. This dependency ensures customers take their card with them after they pay. One brand of card reader is designed for customers to insert their card upright, perpendicular to the screen. This makes the card more obvious, and—I’m giving the designer extra credit—the upright card provides additional privacy to help shield the customer’s PIN from prying eyes. These changes show that the design focus is now on more than just verifying the PIN; it’s about doing it quickly and comfortably, without compromising future use of the card. It’s about the whole experience.
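Here’s a rough sketch of that remove-the-card dependency, written as a simple state machine. It’s my simplification for illustration, not any vendor’s actual firmware.

```python
# Sketch of the "remove card before completion" dependency.
# States and transitions are a simplification for illustration only.

class CardTerminal:
    def __init__(self) -> None:
        self.state = "IDLE"

    def insert_card(self) -> None:
        self.state = "PIN_ENTRY"

    def pin_verified(self) -> None:
        self.state = "APPROVED"

    def remove_card(self) -> None:
        if self.state == "APPROVED":
            self.state = "CARD_REMOVED"

    def complete_transaction(self) -> bool:
        """The cashier cannot finish until the customer has taken the card."""
        return self.state == "CARD_REMOVED"

terminal = CardTerminal()
terminal.insert_card()
terminal.pin_verified()
print(terminal.complete_transaction())   # False: card still in the reader
terminal.remove_card()
print(terminal.complete_transaction())   # True: customer has taken the card
```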

A good hardware designer works with an interaction designer to make a device that works well in its environment. A good user-experience designer ensures customers can succeed with ease. A good usability analyst tests the prototypes or early versions of the device and the experience to find any glitches, and recommends how to fix them.

Drivers on the phone: Misusing the original social network

Researchers have been tracking the use of phones by drivers for several decades. We know that phones reduce driver performance, and that one fifth of motor-vehicle accidents involve mobile phone use. Heavy traffic and stop-and-go traffic compound the risk, because driving in this type of traffic requires more attention. The type of phone use is also relevant. In Japan, dialling and talking while driving was involved in about one sixth of accidents, whereas attempting to locate the phone when it chimes to announce an incoming text message or voicemail was involved in almost half of phone-involved accidents. In addition, laws restricting phone use do little—at this stage—to reduce actual cell-phone use. This research applies not only to you and me, but also to professional drivers.

Can we influence the phone use of drivers?

I was in a taxi earlier this week. Traffic was heavy, so when the driver’s phone rang, I said: “Please don’t answer unless you pull over first.” The driver decided not to stop and not to answer the phone call. Instead, he attempted to read the incoming caller’s phone number, which involved taking his eyes off the road as he repeatedly glanced at the phone. At the next red light, my taxi driver announced: “I’m just going to use my phone quickly.” He made a hands-free call and was still talking when the traffic signal turned green and we resumed driving. After ending that call, he answered another incoming call, again hands free because we were in a place where it’s against the law to touch the phone while driving.

Incidentally, we know that hands-free phones don’t help driver safety.

Emotional rewards

In the back seat of the taxi, I decided to grin and bear it, because a phone offers a driver more immediate rewards than most fare-paying passengers do.

From time to time, all people—not only taxi drivers—find it challenging to ignore their phones. Mobile phones provide instant emotional rewards when you attend to them: your reward is interaction with your family, friends, colleagues, and business associates. Conversations and messages offer the phone user entertainment, drama, tension, (information about) money, connection, belonging—all manner of emotional reward.

If you see why phones are so rewarding to use, then you understand (part of) the popularity of social-networking sites, as well. Like phones, social-networking sites offer interaction with friends, family, and colleagues, regardless of whether these sites are accessed on traditional computers or mobile and wireless devices.

Service design

With a fifth of traffic accidents related to phone use, it’s worth exploring how to reduce the wrong kind of phone use by drivers.

If our goal is safety, and we assume that safety is a measurable attribute of service design, then what would it take to design safer services delivered by professional drivers? Here are a few ideas.

Change beliefs and opinions. My taxi driver believes he’s an expert driver and volunteered that he’s never had an accident in a decade of driving. In another decade, ad campaigns similar to those against drunk driving might change his mind. For a more immediate effect, driving simulators could help professional drivers learn how phone use affects their driving performance.

Standards and pressure. Someone recently told me that they limit smart-phone use in business meetings with one simple rule that everyone agrees to in advance: You can check your phone messages and email if you read the message out loud, for everyone in the room to hear. When it comes to phone use in vehicles, could the phone report to peers and employers when it is used while driving? Peers can apply pressure and employers can set standards with pay-related and job-related consequences. In North America, some employers already do this.

Technical solutions. Phone networks know when a phone is moving in traffic, from cell to cell. In addition, smart phones have GPS—so they know when they’re moving on the road. Phone companies could offer a soft-lock feature that silences the chimes and rings for incoming messages, texts, and calls, and that restricts outbound calls to emergency services while the vehicle is moving. For drivers in the delivery sector and service sector, a not-while-driving soft lock could reduce lawsuit payouts in case of injuries in traffic.
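As a rough illustration of the soft-lock idea, here’s a minimal sketch. The speed feed and the do-not-disturb hook are stand-ins for whatever a phone platform actually provides; the names are mine, not a real API.

```python
# Minimal sketch of a "not-while-driving" soft lock.
# The speed feed and do-not-disturb hook are hypothetical stand-ins for
# whatever the phone platform actually provides.

from typing import Callable, Iterable

DRIVING_SPEED_KMH = 20            # above this, assume the phone is in a moving vehicle
EMERGENCY_NUMBERS = {"911", "112"}

def outbound_call_allowed(number: str, locked: bool) -> bool:
    """While the soft lock is on, only emergency calls go out."""
    return (not locked) or number in EMERGENCY_NUMBERS

def run_soft_lock(speed_feed: Iterable[float],
                  set_do_not_disturb: Callable[[bool], None]) -> None:
    """Silence notifications whenever the GPS speed suggests driving."""
    locked = False
    for speed_kmh in speed_feed:
        moving = speed_kmh > DRIVING_SPEED_KMH
        if moving != locked:
            locked = moving
            set_do_not_disturb(locked)

# Simulated usage: speeds from a short drive, printing the lock state.
if __name__ == "__main__":
    run_soft_lock([0, 5, 30, 50, 10, 0],
                  lambda on: print("do-not-disturb", "on" if on else "off"))
    print(outbound_call_allowed("911", locked=True))   # True: emergencies always allowed
```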

Legal requirements. Distracted driving is against the law in many places. Penalties vary, as does compliance.

These are just a few ideas to kick-start what I believe needs to be a public discussion. What ways can you think of to redesign or influence phone usage by drivers?

Cumulative cost of a few seconds

Currently, I’m on a project team that’s designing, building, and implementing call-centre software. You can probably imagine the call-centre experience from the customer side—we’ve all had our share of call-centre experiences. I’ve been looking at call centres from the other side—from the perspective of the customer-service agents and their employer.

I started by observing customer-service agents on the job. At the site I visited, the agents were using a command-line system, and the agents typed so fast that I couldn’t make sense of their on-screen actions. I signed up for several weeks of training to become a novice customer-service agent. This allowed me to make sense of my second round of observations, and appreciate how efficiently the agents handle their customer calls. It also helped me to identify tasks where design might improve user performance.

Wrap-up choices

For example, after each call the agent decides why the customer called and then, by scanning lists of main reasons and detailed reasons, “wraps up” the call, as illustrated. I measured the time on task; the average wrap-up takes nine seconds.

It’s only nine seconds

Nine seconds may not seem long, but let’s make a few (fictitious but reasonable) assumptions, and then do a little math.

If the average call-handling time is five minutes, or 300 seconds, the 9 seconds spent on call wrap-up is 3% of the total handling time. A full-time agent could spend 202,500 seconds—that’s 56¼ hours per year—on call wrap-ups, assuming a 7½-hour workday, about 250 workdays per year, and no lulls in incoming calls. Since call volumes vary, there will be times when call volumes are too low to keep all agents taking calls. The customer-service agents have other tasks to complete during such lulls, but if we assume this happens about a third of the time, we need to round down the 56¼ hours accordingly. Let’s choose a convenient number: 40 hours, or one workweek per agent per year.

One workweek is 2% of the year.

Based on this number, a redesigned call wrap-up that takes only half the time would save one percent of the labour. Eliminating the wrap-up entirely would save two percent. That frees a lot of hours for other tasks.

A similar calculation on the cost side (n hours to design and implement changes) leaves us with a simple subtraction. Projected saving minus cost is the return on investment, or ROI. Comparing that number to similar numbers from other projects that we could tackle instead—the opportunity costs—makes it easy to decide which design problem to tackle.
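For anyone who wants to check the arithmetic, here’s a small back-of-the-envelope calculation in Python. The inputs are the fictitious-but-reasonable assumptions from above, plus a hypothetical design cost and team size for the ROI line; none of it is real project data.

```python
# Back-of-the-envelope estimate of the annual cost of a 9-second wrap-up task.
# All inputs are the illustrative assumptions from the text, not real project data.

WRAP_UP_SECONDS = 9          # time to wrap up one call
CALL_SECONDS = 300           # average call-handling time (5 minutes)
WORKDAY_HOURS = 7.5          # full-time workday
WORKDAYS_PER_YEAR = 250      # roughly one year of workdays
LULL_FRACTION = 1 / 3        # share of time without incoming calls

calls_per_day = (WORKDAY_HOURS * 3600) / CALL_SECONDS           # 90 calls
wrap_up_hours_per_year = (calls_per_day * WRAP_UP_SECONDS
                          * WORKDAYS_PER_YEAR) / 3600            # 56.25 hours

# Discount for lulls, then round to a convenient number (~40 hours).
busy_hours = wrap_up_hours_per_year * (1 - LULL_FRACTION)        # 37.5 hours

year_hours = WORKDAY_HOURS * WORKDAYS_PER_YEAR                   # 1875 hours
share_of_year = busy_hours / year_hours                          # ~2%

print(f"Wrap-up time per agent per year: {wrap_up_hours_per_year:.1f} h")
print(f"After discounting lulls:         {busy_hours:.1f} h "
      f"({share_of_year:.1%} of the year)")

# ROI of a redesign: projected saving minus design-and-build cost.
halved_wrap_up_saving = busy_hours / 2   # hours saved per agent per year
design_cost_hours = 100                  # hypothetical "n hours" of design work
agents = 50                              # hypothetical team size
roi_hours = halved_wrap_up_saving * agents - design_cost_hours
print(f"ROI for {agents} agents: {roi_hours:.0f} h in year one")
```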

Simpler software leads to greater changes

If a group of users is accustomed to a complex software system, and you’re designing its replacement, how simple can you design that replacement to be?

Simplifying software

Simpler software—software that is discoverable, easy to learn, and easy to explain to others—frees its users to focus on tasks that add value. It may also frustrate former expert users as they suddenly find themselves less-than-expert users of the new tool.

If the software you’re replacing is so complex that it requires a dedicated group to mop up the errors of other users, then the impact of a very simple replacement is potentially disruptive. The users’ jobs are going to change.

The discipline of change management advocates a structured approach: transition individuals and teams to the new software in a controlled manner, following a pre-defined framework with, ideally, only reasonable modifications. But for an in-house UX-design team tasked with replacing a complex legacy system in one fell swoop, questions arise:

  • How much workflow change is reasonable? How much process change is reasonable? How much is too much?
  • How much organisational restructuring will a simpler system trigger? Who prepares the company’s other teams for significant change and possibly for job reassignment?
  • Should any inefficiency be deliberately ported from the legacy system into the new system, merely to reduce the scope of change, for the comfort of existing users or to reduce the soft costs of workplace disruption?

In a business environment, where data—numbers—often have more sway than wireframes and design walk-throughs, one way to prepare stakeholders (that is, managers) for significant change is to test the designs and prototypes early and to measure their impact on user performance. User-experience designers and usability researchers then need to communicate their projections far enough in advance to permit the user groups to plan for change.

Natural mapping of light switches

I recently moved into a home where the light switches are all wrong. I was able to fix one problem, and the rest is a daily reminder that usability doesn’t just happen—it takes planning.

Poorly mapped light switches.
The switch on the left operates a lamp on the right, and vice versa. This is not an example of natural mapping.

On one wall, a pair of light switches was poorly mapped. The left switch operated a lamp to the right, and the right switch operated a lamp to the left. The previous resident’s solution to this confusing mapping was to put a red dot on one of the switches, presumably as a reminder. I put up with that for about three days.

Continue reading “Natural mapping of light switches”

A banister has multiple user groups

We don’t always know what a design is intended to convey. We don’t always recognise or relate to a design’s intended user groups. But we don’t have to know everything that an object’s design is intended to do, in order to make effective use of the object.

Video showing metal inserts in a wooden banister.

I imagine the metal inserts in the wooden banister (see the video, above) are detectable warnings for people who are visually impaired, but that’s only a guess. If you watch the video again, you’ll see that the metal inserts do not occur at every bend in the staircase.

Whatever the intent, the banister fully met my needs.