e-consultancy’s recent report ‘Reducing Customer Struggle 2012’, based on a survey of nearly 500 senior digital managers, shows that most don’t understand the mobile user experience: 82% rated their knowledge as OK, poor or very poor. The implications of this are:
- Poor conversion rates
- Low levels of user satisfaction
- Low net promoter scores
- Damage to brand value and reputation
There are now many ways of gaining insights into the user experience. Quantitative analysis, such as web analytics, shows where there are problems. Monitoring calls to the customer service team, social media analysis, social listening, online feedback forms and customer surveys will tell you what users are frustrated about, but usually won’t reveal what’s causing the problems.
To understand why problems are happening on websites and apps, it’s necessary to undertake user research that focuses on behaviours, not attitudes: what people say they do is often different to what they actually do. Watching real users interacting with a site or app (both in development and live) is the only way to understand the issues that interfere with a good user experience. It also forces those with responsibility for the site or app to ‘walk in the users’ shoes’ – often an unsettling experience that can encourage a Damascene conversion to a user-centred approach. And usability testing is still the best method for understanding users’ behaviours in detail, because you can observe them directly.
Usability testing comes in three main ‘flavours’:
- Unmoderated online testing - where testers undertake testing on a site and respond to a series of pre-defined questions before, during and after the testing
- Unmoderated user videos - where testers (usually from a panel) are given a task and then asked to ‘think out loud’ whilst using a site and the session is recorded for subsequent analysis
- Moderated lab & remote testing – where users undertake tasks using think aloud protocols in moderated research either in a lab or remotely using desktop sharing software
Unmoderated research is usually cheaper; however, it requires questions to be pre-defined, whereas the most insightful questions are often prompted by testers’ behaviours during a research session and cannot be anticipated in advance. Also, if necessary, moderators can ‘guide’ testers past barriers (once the usability problem has been made clear) to enable them to work through a process in its entirety. Furthermore, online research relies on self-reporting – and, as pointed out earlier, what users say they do is often different to what they actually do.
So, what’s the best approach? Web Usability argues that if you want to find out why problems happen on websites and apps then bespoke moderated usability testing (yes our type!) is still the best approach. This uses:
- testers who have been specifically recruited to fit the client’s required profile and so are real-life users – not ‘professional’ testers
- skilled moderators who adopt a user-led approach to facilitation and are able to explore issues (often unexpected) as they arise, probing why testers do things and exploring their emotional reactions to the site
- eye-tracking equipment so we can see what users do and don’t look at; whilst we don’t recommend heatmaps, ‘liveviewer’ adds enormously to the insight into how users look at websites and apps
- labs and observation studios so clients can observe the testing in real time - and ask additional questions 'on the hoof' prompted by the testing outcomes and not anticipated in advance
But knowing what the problems are is only half the answer. Identifying the right actions and getting them implemented is the difficult bit! One criticism of lab-based usability testing is that it ‘involves numerous meetings and produces fat reports that nobody reads’. Well, not the way we do it.
As well as doing rigorous research, we focus on the ‘process’ of getting actions implemented; our experience in change management consultancy means we have the understanding and skills to do this. Just writing a report will often fail to bring about action, because individuals don’t accept the results, or there is no ‘shared understanding’ of them, or the recommendations don’t ‘fit’ the client context. A key element of our approach is to encourage all those who can influence the implementation of the research outcomes to attend at least some of the research, followed by a discussion facilitated by Web Usability. This discussion supports ‘sense-making’, enabling the development team to take ownership of the research results, develop a collective view of the issues to be addressed, and agree the appropriate actions in the light of the client’s resourcing, technological and political constraints. It also minimises the need for subsequent meetings.
And our testing is impartial. We don’t develop sites, so we don’t have an axe to grind. More than that, we will challenge solutions and concepts which we feel will increase customer struggle or fail to meet our clients’ aims for their site.