Last week we launched an intro to user testing: why to do it, and why it should be an ongoing process of improvement. Now it’s time to get more specific about the types of user testing and how to select the best ones to meet your objectives, budget and timeline.

Although there are a wide variety of user tests, they can be divided into two main categories: qualitative methods (where users tell you what they want) and quantitative methods (where you measure how users actually engage).

You could consider these categories the art and science of user testing.

Qualitative methods: the art of user testing

Qualitative testing is all about listening. It helps you gain insight into users’ attitudes, desires, and objectives, as well as their reactions to ideas like a proposed design. At Hanson, we usually perform qualitative testing prior to or during the early stages of development, using a small sample group.

Qualitative methods for gaining insight into user preferences include:

Advisory Boards are small groups of selected representatives who can describe the issue(s) you’re trying to solve and the desired future state. They typically include some stakeholders and may or may not include end users, depending on the project. This option is usually the fastest and least expensive type of user research, but it risks having too little (or no) end-user input to ensure you’re meeting the project’s real needs. It’s often used alongside other research methods and may be the first step of a project, helping you understand the goals and reasons for undertaking it.

Focus Groups are small groups of end users who are asked questions in a group setting in order to gain user feedback about the project. The cost and timing for this type of research often depends on how easy it is to access end users and get the sessions arranged. Typically focus groups are completed over the course of a day or two, with the results analyzed shortly thereafter.

In Field Research and Observation, you go to the end users to ask direct questions and observe them in the actual environment where the issue you’re trying to solve (or your proposed solution) plays out. This lets you gather more first-hand knowledge and identify factors that may not surface in meetings or interviews. The cost and timing depend on how easy it is to access the end users in their environment and how many visits are needed.

In-Depth Stakeholder/User Interviews are conducted either one-on-one or in very small groups of 2-4 users with similar roles, so that feedback can be gathered separately and then analyzed for overarching themes. Timing is subject to the interviewees’ schedules, but the extra time is usually worthwhile since it allows more users to be directly involved.

Qualitative methods can also be used to observe user behavior:

In Task Analysis, you identify specific tasks and then ask users to walk you through the process they use (or wish to use) to complete that task.

In Scenario-Based Prototype Testing (local or remote), you show a proposed solution and ask users to walk through specific scenarios in order to determine whether it meets their needs. Low-fidelity prototypes are helpful for testing preliminary site information architecture, taxonomy, and content, while high-fidelity prototypes are helpful for testing the visual design and planned interactions on the site.

In a Cognitive Walkthrough, you walk through the system yourself with a set of questions about what you want to accomplish, and use that to ascertain how well the proposed solution performs. While this method is typically less expensive and faster than scenario-based prototype testing, it may be riskier because real end users aren’t the ones evaluating the proposed solution.

An Expert Review / Heuristic Evaluation is a walk-through of the proposed solution by a user experience expert who specifically checks to ensure that it meets identified best practices for user experience design.

Quantitative methods: the science of user testing

Quantitative methods are all about gaining reliable measurements of user behaviors and patterns of use. At Hanson, we often use these methods early on to test a proposed solution or get feedback about an existing solution. But we also use them at the conclusion of a project and as part of our ongoing testing for improvements. Most are conducted using a large sample group.

First up, quantitative methods for gaining insight into user preferences:

Online Surveys are a quick and easy way to get feedback from a larger group of people. Ideally they combine closed questions (such as yes/no or rating scales) with open-ended fields that give users the opportunity to explain their responses, maximizing your understanding of the results.

Competitive Benchmarking is typically performed at the beginning of a major project and then periodically in order to understand the content and tools that competitors are providing to their end users. In some cases those solutions also can be tested to determine what is or isn’t working for your target users.

Card Sorting is primarily used to gain feedback on your information architecture; it allows users to either react to a proposed structure or to create their preferred structure for available content. This testing can be performed in person or online.
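To make the analysis side of card sorting concrete, here’s a minimal sketch (the cards and participant groupings are hypothetical): pairs of cards that participants repeatedly place in the same pile suggest categories your information architecture should keep together.

```python
from collections import Counter
from itertools import combinations

# Hypothetical open card-sort results: each participant's piles,
# expressed as lists of card labels.
results = [
    [["Pricing", "Plans"], ["Docs", "Tutorials", "FAQ"]],
    [["Pricing", "Plans", "FAQ"], ["Docs", "Tutorials"]],
    [["Pricing", "Plans"], ["Docs", "FAQ"], ["Tutorials"]],
]

# Count how often each pair of cards lands in the same pile.
pair_counts = Counter()
for piles in results:
    for pile in piles:
        for a, b in combinations(sorted(pile), 2):
            pair_counts[(a, b)] += 1

# The most frequently co-grouped pairs are candidates for shared categories.
for pair, count in pair_counts.most_common(3):
    print(pair, count)
```

Online card-sorting tools typically produce this kind of co-occurrence matrix for you, along with dendrograms built from it.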

Then there are the quantitative methods for testing user behavior:

Heat Mapping shows you where users click so you can better understand how they navigate. Various software packages let you run this type of testing on live sites as well as on concepts.

Web Analytics let you measure the performance of a solution after it goes live, by allowing you to see which pages users visit, how they get to various pages, where they enter and leave the site, as well as other key performance indicators defined as part of the solution.
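As a simple illustration (not tied to any particular analytics package), here’s a sketch of how entry pages, exit pages, and a bounce rate could be derived from a raw page-view log; the session data is hypothetical.

```python
from collections import Counter

# Hypothetical page-view log: (session_id, page), in chronological order.
page_views = [
    ("s1", "/home"), ("s1", "/pricing"), ("s1", "/signup"),
    ("s2", "/home"), ("s2", "/docs"),
    ("s3", "/pricing"),
]

# Track each session's first page (entry), last page (exit), and length.
entries, views_per_session, last_page = Counter(), Counter(), {}
for session, page in page_views:
    if session not in last_page:
        entries[page] += 1          # first view in this session
    views_per_session[session] += 1
    last_page[session] = page      # overwritten until the final view

exits = Counter(last_page.values())

# Sessions with only a single page view count as bounces.
bounces = sum(1 for n in views_per_session.values() if n == 1)
bounce_rate = bounces / len(views_per_session)

print("entries:", dict(entries))
print("exits:", dict(exits))
print(f"bounce rate: {bounce_rate:.0%}")
```

Real analytics tools compute these KPIs (and many more) automatically; the value of the sketch is seeing that they all reduce to simple aggregations over session data.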

A/B or Multivariate Testing allows you to split site visitors into groups to test two or more solutions and see which performs better for the specified task. Key performance indicators can reveal which is most successful, providing hard data on which design to implement.
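The “hard data” step usually means a statistical check that the observed difference between variants isn’t just noise. One common approach (the visitor and conversion numbers below are hypothetical) is a pooled two-proportion z-test:

```python
import math

# Hypothetical results: visitors and conversions per variant.
a_visitors, a_conversions = 5000, 400   # variant A: 8.0% conversion
b_visitors, b_conversions = 5000, 460   # variant B: 9.2% conversion

p_a = a_conversions / a_visitors
p_b = b_conversions / b_visitors

# Pooled two-proportion z-test: how surprising is this difference
# if the two variants actually convert at the same rate?
p_pool = (a_conversions + b_conversions) / (a_visitors + b_visitors)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / a_visitors + 1 / b_visitors))
z = (p_b - p_a) / se

# Two-sided p-value from the normal CDF.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"lift: {p_b - p_a:+.3f}, z = {z:.2f}, p = {p_value:.3f}")
```

A/B testing platforms run this kind of test (or a Bayesian equivalent) behind the scenes; the point is that “which performs better” is a statistical question, not just a comparison of two raw numbers.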

You can’t really do too much testing. For most projects, a combination of qualitative and quantitative testing is invaluable, and testing all along the way will keep you aligned to user needs as you develop (and afterwards, as you continue to optimize). So let your objectives, timeline and budget guide your test planning.

Coming Up Next: Tips for conducting user testing

Now that you’re familiar with the art and science of testing methods, it’s time to decide how to conduct your testing. What’s the difference between local and remote testing? When is a usability lab required? And how can you make the most of online testing services? We’ll go deeper into these questions and offer a testing example from our own experience in the final post in our series, coming next week.
