Web Usability Blog

Why reports won't improve your website

Written by Peter Collins | Nov 16, 2010 4:13:31 PM

"Just conducting a usability test or producing a list of recommendations won't guarantee change in a design" (Spool, 2003).

The traditional approach to usability consulting - doing the research, perhaps letting the client drop into the viewing suite to watch a few testers, and then delivering a report and presentation with recommendations - is being challenged.

Spool went further in 2007 and proposed that usability consultants should "stop making recommendations" and "seek out new techniques for helping client teams focus on user needs".

Absolutely right - this is what WUP has been doing for years! Some usability experts now acknowledge that making a website more effective takes more than the delivery of expert recommendations; it needs all the relevant stakeholders in the organisation to buy in to those recommendations so that they will actually take action to make the necessary changes. As Wixon (2003) says, "a method that detected 100% of problems would still fail if it did not work within the development process of an organisation". Success "depends on things like whether the method, in its very practice, encourages participation, buy-in, and collaboration by the development team".

So, why won't reports improve your website?

Making the 'right' change happen on a web site is most often about getting the people responsible to accept something is a problem, agree what the problem is, and then agree a solution. To do this, they need to 'own' the problem and the solution. Just writing a report will often fail to bring about the 'right' change for three main reasons:

1. Key individuals may resist change because it conflicts with their view of what's appropriate. A report documenting the user experience is often not enough to stimulate the 'right' user-focused change. A document from a usability consultant, however expert, will not convince someone who does not want a particular change to happen that it should. In order to 'own' the problem, people often need to see it for themselves: they need 'experiential' knowledge of how the user reacts and feels, and to have their tacit assumptions about the issue challenged.

2. But even observation on its own is not always enough. Everyone who observes a usability testing session will develop their own tacit 'mental model' of what's happening, based on their own experiences, values and prejudices. The whole development team (e.g. web project managers, designers, programmers, content providers, marketers) may hold divergent and conflicting views on what needs to be done, and so fail to reach agreement on priority areas for action. The only way to get them to a common view of the issues that will enable them to move forward is to have them articulate, share and discuss their views in the light of observed user experience research.

3. Recommendations made by external consultants may be impractical or inappropriate to implement in the client context for financial, technical or political reasons. Therefore, because the recommendations don't 'fit' the client context, they don't get implemented.

And so how can the usability consultant be most useful?

Ed Schein is a leading expert on consulting interventions: his key underlying philosophy is that "most of what a consultant does in helping organisations is based on the central assumption that one can only help a human system to help itself" (Schein, 1999); i.e. the role of the consultant is to help the client decide what to do, rather than telling the client what to do. He talks about two extremes of consulting:

  • Expert: where the consultant provides 'expert' knowledge and recommendations; this is how most usability consultants operate
  • Process: where the consultant helps the client decide what to do, based on the assumption that the client will always know their organisation better than the consultant, and that sustainable interventions can only be achieved when the client is involved with and owns the definition of the problem, consideration of the diagnosis, and the development of the recommendations.

Focussing on the expert mode alone can lead to an outcome which may address the website's issues but does not facilitate agreement about what needs to be done or lead to sustainable change. Purely process interventions, on the other hand, may improve processes but not necessarily lead to improvement in the 'right' outputs. Combining these two modes of consulting enables the consultant to provide the expert knowledge and evidence and facilitate an outcome that 'fits' the client's situation.

Chris Argyris (1970), another researcher on consulting interventions, identifies three basic requirements for an effective consulting intervention:

  • valid information and user evidence to inform decision making, i.e. rigorous user testing research
  • everyone responsible for implementing the project must be able to have their say about the evidence and the action required
  • the consultant needs to facilitate internal commitment to the project outcomes

So, both these established organisational consultants believe that consultants can add most value to clients by providing relevant evidence and then facilitating an outcome that the client is committed to.

Reflecting these views, Molich (2007), an expert in the usability field, now suggests a number of ways to make usability recommendations more useful and usable including being "aware of the business or technical constraints" and showing "respect for the team's constraints"; these comments imply the need to understand the organisational context, as well as undertaking the usability evaluation and reporting on the outcomes.

Our Approach

The WUP consulting intervention process combines both 'expert' and 'process' elements: the approach involves rigorous user research, facilitates the development of a shared view of the issues through collective 'sense making' (Weick, 1995), and then enables agreement on the appropriate action for that client and its circumstances. This cooperative process helps to ensure that we deliver the appropriate solution in a way the client can implement, and that the client assumes ownership, agreeing only to a plan that is actionable within its resources.

We insist that all those in an organisation who can influence the implementation of the session's outcomes attend our usability testing sessions. During the testing session, observers are asked to record issues using cognitive mapping techniques for subsequent discussion, in order to capture their immediate reactions to the testers' experiences: this also means that everyone has a 'voice' in the discussion, whatever their role or status in the team. Subsequently, a WUP consultant facilitates a discussion of the issues, to identify the priority issues to be fixed and to draw out the organisational constraints within which actions need to be framed, be they technical, financial or political. Through this discussion the development team takes ownership of the research results, develops a shared view of the priority issues, and agrees the implications of those issues, the required actions and the priorities. This bridges the 'knowing-doing' gap and minimises the chance that the feedback will be 'filed' without action, becoming yet another research report gathering dust on the shelf!

In Conclusion

A report or even a presentation of recommendations will often fail to generate the change that will make a website more effective at satisfying users or organisational goals. This can lead to disillusionment and a sense that the research was a waste of time and money. User research is always valuable; the challenge for the consultant is to help the client maximise that value and enable change and improvement.

Web Usability Partnership Ltd
Unit 15,
Lansdowne Court,
Bumpers Farm,
Chippenham,
Wilts SN14 6RZ
Tel: 01249 444 757
Email: info@webusability.co.uk
Web: wup.tiltuat.co.uk 

References

Argyris C. (1970) Intervention Theory and Method: a behavioural science view. Addison-Wesley, Reading Massachusetts.

Molich, R. et al. (2007) Making usability recommendations useful and usable. Journal of Usability Studies, Vol 2, Issue 4, August, 162-179.

Schein, E.H. (1997) The concept of "client" from a process consultation perspective: a guide for change agents, Journal of Organizational Change Management 10, 202-300.

Schein, E.H. (1999) Process Consultation Revisited: building the helping relationship, Reading, Mass. Addison-Wesley.

Spool, J. (2007) Surviving our success: three radical recommendations. Journal of Usability Studies, Vol 2, Issue 4, August, 155-161.

Weick K.E. (1995) Sensemaking in Organisations. SAGE, California.

Wixon, D. (2003) Evaluating usability methods: why the current literature fails the practitioner. Interactions, Jul/Aug, 29-34.