World Usability Day: Human-centred artificial intelligence

Posted by Lucy Collins on Nov 10, 2020 2:59:57 PM

On 12th November 2020, we celebrate World Usability Day. The theme this year is human-centred artificial intelligence (AI), which focuses on the potential AI has to increase human productivity:

“If we can design AI which is “reliable, trusted, and safe” we can dramatically enhance human performance in the coming decade.”

There is a very big ‘if’ in that sentence, however. The problem is that many people currently don’t trust AI, and it is going to take a lot to change their minds.

A study by Pegasystems Inc. found that “less than half (40 percent) of respondents agreed that AI has the potential to improve the customer service of businesses they interact with, while less than one third (30 percent) felt comfortable with businesses using AI to interact with them.”

Why don’t we trust AI?

For many, this lack of trust comes from a lack of understanding. AI has historically been the playground of programmers and techies. Talks and articles about AI often feel like they are written in another language, and many people see AI as it is depicted in the movies – autonomous robots that will take over the world from their human masters.

The reality is, the AI of 2020 is not smart enough for world domination, although it can behave in ways its creator didn’t intend. And this can have some undesirable consequences.

Some of these unexpected outcomes are harmless and rather amusing. Janelle Shane has written a whole book about them: “You Look Like a Thing and I Love You” – which is what happens when you ask an AI to generate a chat-up line for you.

However, there have been some unfortunate examples of AI going really wrong, which hasn’t done much for its reputation: driverless cars that have caused fatal accidents, and job-screening algorithms that have proved to discriminate against female applicants.

It is easy to blame the AI when these things go wrong, but really the fault lies with the data it has been fed and the task it has been set.

As Guru Banavar, IBM Chief Science Officer for Cognitive Computing, explains, “Machines get biased because the training data they’re fed may not be fully representative of what you’re trying to teach them.”
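To make that concrete, here is a minimal, hypothetical sketch in Python (the dataset, features and numbers are all invented purely for illustration): a screening model trained on historical hiring decisions that were themselves biased will reproduce that bias, even though nothing in the code tells it to discriminate.

```python
# Hypothetical illustration only: synthetic data showing how a model trained on
# biased historical decisions reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Applicant features: years of experience and (for illustration) gender.
experience = rng.normal(5, 2, n)
is_female = rng.integers(0, 2, n)

# Historical screening decisions: driven by experience, but past reviewers
# systematically marked down female applicants. That bias ends up in the labels.
score = 0.8 * experience - 1.5 * is_female + rng.normal(0, 1, n)
passed_screen = (score > 4).astype(int)

X = np.column_stack([experience, is_female])
model = LogisticRegression().fit(X, passed_screen)

# Two applicants with identical experience, differing only in gender.
applicants = np.array([[5.0, 0.0], [5.0, 1.0]])
probs = model.predict_proba(applicants)[:, 1]
print(f"Male applicant:   {probs[0]:.2f} predicted chance of passing the screen")
print(f"Female applicant: {probs[1]:.2f} predicted chance of passing the screen")
# The model has simply learned the pattern in its training data - the skew
# comes from the historical decisions, not from any explicit rule in the code.
```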

How we already use AI

And the reality is, even if we are not aware of it, we are already using AI in many small ways.

On our smartphones, Google Maps, voice-to-text functionality and our social media apps all rely on AI.

In our homes, smart personal assistants such as Amazon’s Alexa and Google Home are also making use of AI.

AI is also having a positive impact on the broader digital community. Automatically generated alternative text is allowing blind people to benefit from visual content online. Translation algorithms mean content can be disseminated in multiple languages without great expense. And AI-generated captions ensure audio and video content can be consumed by deaf users.

How can we learn to trust AI?

When we are conducting usability testing, one of the things we explore with testers is how a website makes them feel. Do they trust what they are being told and feel happy to continue engaging with it?

Trust issues in digital services are often caused by one of the following:

  • Content does not provide users with the information they need. This leaves them with questions and concerns that may prevent any further engagement with a brand.
  • There is a lack of transparency about how a company operates or uses information. If key information (like how you use data or spend donation money) is not readily available to users, it can increase levels of distrust.

From a consumer point of view, AI is no different. It has great potential to improve human productivity, but it must be built in an ethical and transparent way.

And to do that, you need to understand your users.

Firstly, we need to get inside the heads of our users. Like most digital services, AI is built largely by young, white men. Their experience of the world and how they interact with technology will be miles away from that of the average user. If a program is built based on their mental model of the world, with all its associated biases, the resulting AI will be irrelevant or even discriminatory towards many users.

Secondly, we need to understand what users’ trade-offs are. How much value do they perceive in the offer, and what are they prepared to do or share to obtain it? Users may be unwilling to share crucial information, like their location, if they are unsure how that data is used or why it is necessary to the process.

Discovery research with target users can answer these questions by giving you a clear understanding of user needs, motivations and barriers to engaging. The AI can then be developed with the right data set and can provide users with the information they need to answer their questions.

But user involvement shouldn’t end there. Throughout development, the AI interface should be usability tested with target users. This will flag up any issues with the functionality and content, and ensure you catch anything that might impact how much your customers trust you.

“How do we get to AI systems that we can trust? Ultimately, it's going to be through getting our AI systems to interact in the world in a shared context with humans, side-by-side as our assistants. And over time, we’ll develop a sense for how an AI system will operate in different contexts. If they can operate reliably and predictably, we’ll start to trust AI the way we trust humans.” – Vijay Saraswat, IBM Chief Scientist for Compliance Solutions

Read more: What is a chatbot?, How can you use a chatbot?

Topics: Chatbots, Machine Learning

Want to know more?

If you’d like to talk to someone about how you can optimise your digital media with user research and advice, please get in touch!

We would love to hear from you.

Contact us
