This month's UX (user experience) magazine, published by the Usability Professionals' Association, contains some interesting articles on remote user testing and its advantages over lab-based testing. One of the principal advantages is the ability to get large numbers of users to test a site, often several hundred, for costs similar to much smaller lab-based research. What was particularly striking to me was that much of the feedback obtained using these approaches is based on users' attitudes, e.g. 'Was this task very difficult or very easy?' rated on a 5-point scale.
Now, while capturing users' attitudes provides useful feedback on some issues, attitudes are notoriously bad at providing reliable feedback on how easy it is to find information on a site. At the conclusion of a usability testing session we routinely ask users to score the website on a number of dimensions, one of which is 'How easy was it to find information?' Often users will give quite high scores even when they have failed to complete any of the tasks they were set.
Also, different users will often give the same site very different scores. We offer them a 10-point scale, and scores can vary by 5 points or more. So clearly users cannot provide an objective measure of the effectiveness of a site's navigation, only their perceptions of it; otherwise they would all give a site the same score.
However, users' perceptions of a site are affected by a whole range of factors, including their familiarity with the web and with sites similar to the one under test, their sex (women tend to score sites higher than men), their personalities (intolerant men, like me, score sites lower than tolerant men), the number of search strategies available to them, and so on. When you are observing users in the lab, many of these factors are obvious, and you take them into account when reviewing users' experiences on the site. Often the observers will conclude that a site is very poor at getting users to their goals while the user is happily saying it is fine.
So what happens when you are testing users remotely? How do you know that a user's assessment of how easy it was to find things is reliable? Our experience suggests that users' attitudes are of very little value in determining the effectiveness of a site's navigation: other methods are required that look at user behaviours, such as lab-based testing, A/B testing, etc. Users' attitudes can provide very valuable feedback to guide other aspects of a site's development (e.g. what are their goals, does the site content meet these goals, what other information do they want?) but can be an unreliable indicator of how easy a site is to use.