I am a recent addition to the World of Web Usability and am quickly getting to grips with the complexities of UX research. One of the first projects I worked on required us to user test a complex financial site aimed at personal investors. As a recent graduate with a small mountain of debt and minimal income, this is, unsurprisingly, an area I know little about.
While I would be the first to admit my ignorance, our ‘expert’ testers had a harder time coming to terms with the gaps in their understanding.
No one wants to look stupid. We find it hard to admit there is something we don’t know, particularly in an area where we are supposed to be the expert. We feel embarrassed and awkward and do our best to mask our ignorance. In other words, we fake it. This was the problem we faced while testing this site.
The website was chock-a-block with complex financial terminology, concepts and information. The testers we recruited were experienced personal investors, happy to make their own financial decisions, so they should have had no problem getting their heads around the site, right? While many asserted their understanding, we doubted how reliable this was and delved a bit deeper, asking testers to explain the concepts to us ignorant folk. Many were unable to do so and, when pushed, eventually admitted those three dreaded words: “I don’t know”. This was quickly followed by expressions of stupidity and embarrassment: “I feel like a dunce”, “I feel inadequate”.
This is where we fall into our UX Catch-22: we know the site holds information that is not well understood by users, but these same users are unwilling to admit what they do not understand. This makes it very difficult to identify which areas of the site, which terminology or which explanations are obstructing the user’s journey.
In an ideal world, we could simply rely on what a tester tells us. But to do this, we would need to break the habit of ‘fakery’ and encourage testers to admit ‘I don’t know’ or ‘I don’t understand’ more readily, without the associated feelings of embarrassment.
This was our challenge throughout the project. Once we delved further into testers’ understanding, asking questions with apparently obvious answers (aka the idiot questions), tester after tester identified the same areas of the site as difficult to understand, allowing us to confidently pinpoint which parts needed improvement.
So, to wrap this up: as UX consultants, we should not expect the expert testers we recruit to know everything. As I discovered during this project, if there is something complicated that we are struggling to understand, chances are our testers are facing the same dilemma.