Recently, we asked some of our coworking group and other friends if they'd be willing to participate in a usability test conducted by another user-centered design consultancy. That spurred an interesting conversation about what a usability test is, what we can learn from one, and when a project warrants conducting one.
There are many ways we user-centered designers gain insight into how people use websites. Mature, experienced designers should be open to finding out that they don't have all the answers - but that users DO. These usability activities are the special magic behind great web design. Some of our favorite usability tools are:
- Heuristic evaluation. Measure a proposed design against a set of web design rules of thumb, or web design best practices.
- Contextual inquiry. Study how users work and perform key tasks while in their "native environment." For example, before redesigning an intranet reference system for customer service reps, we want to know about the rep's cheat sheets, post-it notes, bookmarked websites, phone directories, and other job aids, and how they interact.
- User personas. We frequently create fictional representative users of various types, and use those to check the design. For example, Jolie, our young, internet-savvy college student might be fine with a design, while Frank, the retired Colonel who struggles with e-mail will have a more difficult time - and both need to be able to successfully complete a key task.
- Task analysis. We look extensively at how and when every bit of information is transferred during a task, whether the information is moving from a database, a memory, a piece of paper, a post-it note, or a website. We consider physical aspects, environment, and time as well, like having to walk to a printer, or digging the credit card out of a wallet or purse.
- Informal usability study. Sometimes it's enough to just show the person sitting next to you a sketched layout of a screen and ask, "If you were trying to order tickets on this website, what would you do first?" or "This is a restaurant review website. How would you find reviews for Tre Trattoria?" Speaking from my own experience, the hardest part of this type of test is shutting up and letting the person find his or her own way - or fail.
- The full web usability test. Conducted in a usability lab with a controlled environment, one-way mirrors, and video cameras recording the mouse, keyboard, user's face, and the screens.
Here's what a usability test looks like. We see the participant from the control room through one-way glass; we can see her, but she sees a mirror. We are able to guide her and answer questions. Commonly, a usability specialist sits with the participant. Photo courtesy Sun Microsystems.
In web design projects, it's very helpful to watch people try to complete specific tasks using a prototype of a proposed design. A screen that the designer feels is perfectly clear in its layout and labeling can be baffling to a user, and there's no better way to help the designer understand that than for her or him to watch users attempt to perform the task.
For example, when I'm designing an airline ticketing system, I analyze the users' needs and the business requirements, then mock up what I think is a great set of screens to allow users to order tickets. I probably did a great job. But I can't fix problems I'm unable to anticipate, and that's where a usability test comes in. Here's the drill:
- Design the test. The usability specialist works with the design lead to design a usability test. We discuss the goals of the test, the users and tasks, the desired task flow, possible alternative task flows, and criteria for success.
- Recruit participants that match our criteria.
- Schedule participants into the lab.
- "Freeze" the design for the test, and provide a prototype. This prototype can be the actual code, mockups of screens, or a sketch on paper. A great usability specialist can get valid results using any of these artifacts, as long as they have the basic elements needed for the user to make decisions.
- Conduct the test. The sessions are recorded. The usability specialist presents each task to the participant without leading them (which would invalidate the results). The designer and other team members can watch from behind the one-way mirror. Occasionally we may ask questions or clarify something about the task through the intercom, but it's best to remain silent and just learn.
- Document and analyze the results. After all participants have been through the test, the specialist tabulates and analyzes the results. How many participants got all the way through the tasks? Where did they stumble, and why? The usability team pores over videotapes and discusses what happened, looking for patterns.
- Write the report. It's great if the designers can attend sessions, but budgets often don't allow for that. The usability specialist must draw conclusions, support the conclusions with data, and make specific recommendations for improvement.
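The tabulation step above - counting who completed each task and where people stumbled - can be sketched in a few lines of Python. The session data, task names, and stumble labels here are hypothetical, just to show the shape of the analysis:

```python
from collections import Counter

# Hypothetical session records: for each participant, whether they
# completed each task, and (if they failed) where they stumbled.
sessions = [
    {"completed": {"search": True, "order": False}, "stumbled_at": "payment form"},
    {"completed": {"search": True, "order": True},  "stumbled_at": None},
    {"completed": {"search": False, "order": False}, "stumbled_at": "payment form"},
    {"completed": {"search": True, "order": False}, "stumbled_at": "seat selection"},
]

def completion_rates(sessions):
    """Fraction of participants who completed each task."""
    totals, passes = Counter(), Counter()
    for s in sessions:
        for task, done in s["completed"].items():
            totals[task] += 1
            if done:
                passes[task] += 1
    return {task: passes[task] / totals[task] for task in totals}

def common_stumbles(sessions):
    """Stumble points, ranked by how many participants hit them."""
    return Counter(s["stumbled_at"] for s in sessions if s["stumbled_at"]).most_common()

print(completion_rates(sessions))  # e.g. 3 of 4 finished "search", 1 of 4 finished "order"
print(common_stumbles(sessions))   # "payment form" tops the list - a pattern worth investigating
```

Numbers like these back up the specialist's recommendations: a stumble point that trips up most participants is a pattern, not an anecdote.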
As a designer myself, I have watched - helplessly - behind the one-way mirror while a user missed the large and obvious (to me) "Order Now" button. As web professionals, it's very easy to lose touch with how it feels to be a novice user. The more experience we have, the harder we have to work at understanding "beginner mind" on any web interface or application. Watching actual users use what you've designed is invaluable experience. Sometimes quite painful, though!
Full usability tests are time consuming to plan, conduct, and report on, and in order for the results to be valid, require the services of a usability expert. The specialist who conducts the test (usually sitting beside the participant) has to be a special kind of person, one who can patiently wait through user frustration, withhold information with a poker face, and not lead the participant in any way.
If you're getting the idea that usability tests are expensive and time-consuming, you're right. At Firecat Studio, we only recommend a formal usability test when the alternative methods can't tell us what we need to know, and when we do not have the luxury of trial and error.
Alternatives to Full Usability Tests
In many cases, we can conduct informal usability tests using paper prototypes or quick mockups and uncover the majority of usability problems with much less time and expense.
Ship, Then Test
A perfectly valid approach is to "ship, then test" - put the site out there, measure users' actual success with tasks, and make adjustments to the interface after launch. Websites aren't like books - they're meant to be reworked, to evolve and grow as you learn more.
Sometimes skimping on usability is penny-wise and pound-foolish, though. A full usability test is in order when:
- Designing complex, custom-programmed interfaces, where reprogramming costs would be extensive.
- Inventing a new type of interface, one for which a widely accepted user interface pattern doesn't apply.
- Time to market is critical and reprogramming or redesign would take longer than the test process.
- A usability problem could result in a highly negative brand experience. This one's a judgment call - any usability problem impacts brand, even on screens used behind the scenes. Some black eyes are worse than others, though.