Card sorting is a well-established technique for figuring out how to classify and label information so that it’s easier to find. It’s a great way to gather insights about the nature of the content and your users’ mental models.
However, while card sorting can help generate an information architecture (IA), it doesn’t guarantee that content is easy to find on your website. Card sorting helps you figure out what should go together, but the results of a card sort usually require substantial massaging to form an IA, and that IA still needs to be proven to work.
The main issue is that people process information differently when performing a seek task (‘find the purchase order form’) as opposed to a sort task (‘sort all of these items into groups that make sense to you’). When in sort mode we are deeply evaluative, applying considerable effort to organize ideas in a coherent manner. In seek mode, we skim through content, readily discarding information we don’t need and selecting quickly when we think we’ve found something – a pretty close approximation of our web browsing habits!
An ideal process would be to generate an IA by asking respondents to work in an evaluative ‘sort mode,’ and test what they come up with by asking a different set of respondents to perform seek tasks.
With this in mind, tree testing aims to get as close as possible to the actual experience of navigating a website while remaining ‘pure’ about testing the IA independent from the visual design, navigation design and page layout.
Tree testing participants are given a task to find something using a proposed IA. Every step users take is then recorded for your analytical pleasure. Did users find the right page? Did they take any wrong turns? How long did it take them?
The tree testing results provide a wealth of information that can be used to identify problem areas in the IA. Tree test analysis is still relatively labour-intensive, but the data is more conclusive and easier to interpret when compared to card sorting. The ability to deliver a conclusive result also helps to overcome project politics. For example:
“When asked to download a purchase order form, forty percent of participants incorrectly started in the products and services section. Although some of those participants found the correct destination eventually, fifteen percent of the total participants never found the form.”
That kind of data is compelling and actionable!
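To show what that analysis looks like in practice, here is a minimal sketch of how the kind of metrics quoted above could be computed from recorded click paths. The data structure, the sample paths, and the `summarise` function are all hypothetical, not Treejack’s actual format:

```python
# Hypothetical tree-test data: each response is the path one participant
# clicked through the tree for the task "download a purchase order form".
responses = [
    {"path": ["Home", "Finance", "Forms", "Purchase order form"],
     "correct": "Purchase order form"},
    {"path": ["Home", "Products and services", "Forms", "Purchase order form"],
     "correct": "Purchase order form"},
    {"path": ["Home", "Products and services", "Catalogue"],
     "correct": "Purchase order form"},
]

def summarise(responses, correct_first_click="Finance"):
    """Success, directness, and failure rates for a single task."""
    total = len(responses)
    success = sum(r["path"][-1] == r["correct"] for r in responses)
    # "Direct" successes never took a wrong turn at the first level.
    direct = sum(r["path"][-1] == r["correct"] and r["path"][1] == correct_first_click
                 for r in responses)
    return {
        "success_rate": success / total,
        "directness_rate": direct / total,
        "failure_rate": (total - success) / total,
    }

print(summarise(responses))
```

With enough responses per task, the same three rates give you exactly the kind of statement quoted above: how many started in the wrong section, how many recovered, and how many never found the destination.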
Unlike full usability testing, tree testing only deals with the IA. But because you’re only testing a site structure, you can quickly iterate on and refine the IA at minimal cost.
Getting started with tree testing
The following advice draws upon our experience with client projects and with helping Treejack users around the world to get the most from their tree studies.
One: Task authoring matters. A lot. Don’t ask your participants to “Find XYZ” twelve times in a row. You’ll see the boredom reflected in your results: a high skip rate and plenty of illogical responses. Mix it up a little and create real-world scenarios. If necessary, ask your participants to “imagine” or “suppose” that they are coming at it from a certain perspective.
Never use the same language in your task description as a label in your IA. For example, if you ask participants to find a form, any label with the word ‘form’ in it will attract undue attention. Try to think of another way to phrase the task.
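As a rough sanity check for this rule, you could scan your task wording against your tree labels for shared words before launching a study. This is a hypothetical helper, not part of any tool, and the crude de-pluralising is just enough to catch ‘form’ vs ‘Forms’:

```python
import re

STOPWORDS = {"a", "an", "and", "the", "to", "of", "for", "in", "on"}

def words(text):
    # Lowercase, split into words, crudely de-pluralise, drop filler words.
    return {w.rstrip("s") for w in re.findall(r"[a-z]+", text.lower())} - STOPWORDS

def leaky_labels(task, labels):
    """Return tree labels that share a content word with the task wording."""
    task_words = words(task)
    return {label for label in labels if task_words & words(label)}

labels = ["Products and services", "Forms and downloads", "Contact us"]
print(leaky_labels("Download a purchase order form", labels))
```

Here the task wording would flag ‘Forms and downloads’ as a giveaway, prompting you to rephrase the task (say, ‘You need to order stationery for the office…’) before the study goes live.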
Two: Don’t bother testing your entire IA. Focus on the parts that matter and that you are unsure about. If you write a task to test your “Contact Us” page, you’ve just wasted the precious attention of your participant, which could have been used to test something peculiar to your site. It’s not worth your time to verify something everybody’s going to be familiar with. This advice also goes for loading up your tree (the IA itself). Use discretion here, but in most cases you can probably leave out the really common ‘boilerplate’ navigation items.
Three: This isn’t a marathon. Ask your participants to complete ten to fifteen tasks. You might have thirty or more tasks you want to test, but for each participant you’ll want to ask them to complete an achievable subset. We recommend collecting 40 or more responses to each task. This means that for 30 tasks displayed 10 per participant, you will need 120 participants to complete your survey: three subsets of tasks, each needing 40 responses.
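The arithmetic behind that recommendation generalises. Assuming participants are split across disjoint subsets of tasks (an assumption on our part; function and names are illustrative):

```python
import math

def participants_needed(total_tasks, tasks_per_participant, responses_per_task):
    """Participants required so every task gets enough responses,
    when each participant sees only a subset of the tasks."""
    subsets = math.ceil(total_tasks / tasks_per_participant)
    return subsets * responses_per_task

# The example from the text: 30 tasks, 10 per participant, 40 responses each.
print(participants_needed(30, 10, 40))  # 3 subsets x 40 responses = 120
```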
Four: Ask questions! We’re always here to help. Email email@example.com.
By Andrew Mayfield, CEO of Optimal Workshop, Optimal Usability’s sister company which creates online tools for UX professionals. He helped to create the world’s first online tree testing tool, Treejack, back in 2007.