We were testing a site the other day for a major retailer that lacked many of the features now considered standard for ecommerce sites: it was not clear what they sold, the pages were busy and cluttered, you had to register before buying, there was no persistent basket, and much else besides.
However, by and large, our testers used the site without major difficulty. There were problems, but nothing that would have stopped them completing a purchase. When asked to give their overall impressions at the end of the session they broadly said it was fine and they would use it again.
So what's going on? How can there be such a mismatch between our expert assessment of the site and the testers' feedback?
I think a number of factors are at play.
We have learnt how to use poorly designed sites. Everything we do on a website is learnt behaviour; nothing about using one is intuitive. The concept of a link and what it looks like is learnt. Where we expect to see navigation is learnt. That adverts can be ignored is learnt (which also means we ignore useful things that merely look like adverts). And if we use the web a lot, we have learnt how to use badly designed sites too.

Think of the number of times you look for something on a page and it's not where you expect. You hunt around in various places and, if you're lucky, find it. Think of the meaningless error messages you often get when filling in a form. Often they don't tell you what the problem is or where it is - bad usability - but you know what to do: you check every field, you try removing the space from the postcode or telephone number, you add numbers or symbols to your password because you know some sites insist on it but haven't told you. If you can't find something because there is too much text on the page or it's badly laid out, you hit Ctrl-F and search for it. You have learnt coping strategies for bad websites.
As the people we recruit to test sites are increasingly experienced, they work around these usability issues, often without even realising they are doing so. So when we ask them what they think, they say "it was fine". It was - but that does not mean it's a well-designed website.
Another factor is the test environment. We bring people into the lab, pay them money and ask them to use a website. It can, in spite of the best efforts of the facilitator, feel like a test of the user rather than of the website. Most testers want to please; we are British, we don't want to complain, and we tend to blame ourselves. All of these factors lead testers to say nice things about the site.
So does this mean usability testing is a waste of time and money? No - provided you focus on the right things. What matters in these circumstances is what users do: their behaviour. When observing, you can spot the coping strategies testers use, but this requires knowing how much better the site could be, so you need a good understanding of best practice in site design. You also often need to ignore what testers say, especially when it contradicts what you have just watched them do. Testers are, in the main, not good at understanding their own behaviour.
So usability testing on its own is not enough to draw meaningful insights. It needs to be combined with a good understanding of best practice in site design and of user behaviour, so that the right conclusions can be drawn from the testing.