March 20, 2020
Elena Venieri
In an increasingly digitalized world, user experience, as we have written several times on this blog, is fundamental.
There are many tools available to design it at its best, but usability tests are the right key to verify the effectiveness and value of the final results.
In this article, we will look at the role of usability tests and at some tips on how to run them.
Usability tests have been around for about a century. The first person to discuss the subject was Frederick Taylor in 1911, with his publication on the principles of scientific management. However, Taylor was not so much interested in usability as in improving working efficiency. It was only with the advent of World War II that people really started talking about usability tests, used to improve the interfaces of the P-51 Mustang and help pilots respond quickly to changes around them.
Since then, the centrality of user experience has grown in step with the digital world, and with it usability tests, to the point of becoming fundamental now that digital reality is part of everyday life.
As a result, usability problems have become increasingly pressing and visible.
For example, if you choose a bar for breakfast and, once inside, find it hard to place your order because of an unexpectedly complicated menu, your first instinct will not be to walk straight out and look for another bar where ordering a coffee is easier. On the web, however, the context is quite different.
The user probably arrived on our site through a search engine. This means that they have at least nine other results one click away. The ease with which, at the first hitch, they can choose to move to another site is enormous.
Another important difference is that in the physical world we are immersed in human interaction: going back to the example of the bar with a very complicated menu, you can simply ask for a cappuccino in a large cup without reading the menu, and the waiter will still understand you.
In everyday life, there is certainly more flexibility and more room to deviate from the behaviour expected by those who designed the service. The world of the web is stricter: the rules on what a user can do are linear and pre-established. People have to adapt to the digital world, not vice versa, and this in itself makes users wary. This means that the experience must be as pleasant as possible, so that users are satisfied with what has been designed for them and do not leave.
At this point, having established that research on user experience is fundamental in the digital age, the only remaining question is: “Why user tests? Isn’t the experience and empathy of a good UX designer, combined with well-crafted Personas, sufficient? Why invest resources in usability tests?” Let’s try to reverse these questions: “Why shouldn’t we do it? Why not have the design of an application (or website) validated directly by the target users for whom we designed the service? Are we afraid of finding out that it will be a total failure?”
In reality, nothing beats observing a user approaching a design for the first time. It doesn’t matter how expert a designer is in the sector in which they are designing: they will never get inside a user’s head. Sometimes, though, the preparation of usability tests, and running them on an audience large enough to be a representative sample, can seem so onerous as to discourage their use.
Nielsen instead teaches us that “something is better than nothing”. In the field of usability, it is not necessary to design complex tests with a large audience of respondents. Or rather, such tests are also useful when we work on complex services, but it is hard to run more than one of them across the entire design process. User tests should instead be lean, so that we can repeat them at every iteration that leaves us in doubt.
Nielsen highlights the cornerstones of usability tests: keep them lean, repeat them often, and run them with a small but representative group of users.
When we are structuring a usability test, we cannot ignore some fundamental stages of its preparation.
First of all, it is essential to define the target and identify 5 people to test the prototype on. They must fall within the defined target, or be as close as possible to the future users of the service, so that we can be sure the results obtained are faithful and credible. To understand whether a person falls within our target audience, it is sufficient to ask a small number of screening questions that narrow the field. For example, designers and people who habitually take part in usability tests are often excluded because their knowledge of interfaces is too advanced. We may also want to exclude people who already know our brand thoroughly.
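The screening step above can be sketched in code. This is only an illustration: the questions (`is_designer`, `tests_per_year`, `knows_brand`) and the exclusion thresholds are hypothetical examples, not a standard screener, and should be adapted to each project’s target.

```python
# Hypothetical screener: questions and exclusion rules are illustrative,
# not a standard; adapt them to your own target audience.

def is_eligible(answers: dict) -> bool:
    """Return True if a candidate fits the target audience.

    `answers` maps screening questions to a candidate's replies, e.g.
    {"is_designer": False, "tests_per_year": 1, "knows_brand": False}.
    """
    if answers.get("is_designer"):            # knows interfaces too well
        return False
    if answers.get("tests_per_year", 0) > 2:  # habitual test participant
        return False
    if answers.get("knows_brand"):            # already knows the brand thoroughly
        return False
    return True

candidates = [
    {"name": "A", "is_designer": True,  "tests_per_year": 0, "knows_brand": False},
    {"name": "B", "is_designer": False, "tests_per_year": 1, "knows_brand": False},
    {"name": "C", "is_designer": False, "tests_per_year": 5, "knows_brand": False},
    {"name": "D", "is_designer": False, "tests_per_year": 0, "knows_brand": True},
    {"name": "E", "is_designer": False, "tests_per_year": 0, "knows_brand": False},
]

# Recruit until we reach the 5 participants a lean test calls for.
panel = [c["name"] for c in candidates if is_eligible(c)][:5]
```

In this toy run only B and E pass the screen, which tells us we would need to keep recruiting to reach 5 participants.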
Another essential step is to draw up a list of tasks to submit to the interviewees, centred on the goal we intend to achieve. We must also take care not to write overly detailed tasks, so as not to fall into cognitive biases and influence the user. It is useful to provide enough detail to frame the problem, without exaggerating. We must then ask ourselves whether we expect to collect qualitative data (e.g. easy or difficult to use) or quantitative data (e.g. a rating, or the time spent on a task). A task that is perfect for a qualitative test probably will not be for a quantitative one. In the first case, we need open tasks, so that the activity to be performed can be freely interpreted. In a qualitative test, moreover, we can modify a task while it is running if we see that we are not reaching the goal we set, or combine two tasks into one.
In a quantitative test, on the contrary, the tasks must have only one possible interpretation and solution. There can be no ambiguity, because otherwise each user could perform substantially different activities or follow different paths in the interface, resulting in a collection of metrics that cannot be compared with one another.
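When tasks are unambiguous, the quantitative metrics they produce can be aggregated and compared across participants. A minimal sketch of that aggregation, with made-up task names and timings purely for illustration:

```python
from statistics import mean, median

# Illustrative session records: completion times in seconds per participant,
# with None meaning the participant did not complete the task.
sessions = {
    "find_opening_hours": [34.0, 41.5, None, 29.0, 38.0],
    "add_item_to_cart":   [61.0, 55.5, 72.0, None, None],
}

def summarize(times):
    """Aggregate one task's comparable metrics: success rate, plus the
    central tendency of completion time among successful attempts."""
    completed = [t for t in times if t is not None]
    return {
        "success_rate": len(completed) / len(times),
        "mean_time": round(mean(completed), 1),
        "median_time": round(median(completed), 1),
    }

report = {task: summarize(times) for task, times in sessions.items()}
```

Because every participant attempted the same unambiguous task, the success rates and times in `report` are directly comparable; with open, freely interpreted tasks this kind of aggregation would be meaningless.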
There are other decidedly interesting steps involved in making usability tests valid and effective. Among them, for example, are the role of the facilitator and the a posteriori analysis of the collected data, issues that we will address in a future article.
Original article posted on Antreem blog