
Pitfalls to Avoid While Conducting Usability Tests

by Sean Treacy | Aug 4, 2016

Despite all the research, there still seems to be debate around the value of usability testing for web applications. Although rarely said out loud, the reality is that usability testing is often the first task to get cut when the budget starts getting tight (ahead of, say, dropping an extra feature).

Sure, didn’t Steve Jobs build the iPhone solely with his own vision? He certainly did, but Jobs is the exception. Without usability testing, you are more likely to end up as the real-life version of Homer Simpson, launching the equivalent of “The Homer.”

The bottom line is that usability testing done right saves time, money, and reputation down the road. Done wrong, it is an expensive chore that creates confusion and uncertainty, and relegates usability testing to an optional phase in the product design process.

There are plenty of articles circulating the web outlining how to conduct usability tests, but here I want to concentrate on the most common mistakes we have encountered as a company, so you can hopefully avoid them in the future:

Select a wide variety of testers, and enough of them

In our early days we asked clients to select users, and we didn’t know much about them until the test itself. The issue was that we ended up speaking to a lot of friends of the founders or, even worse, fellow tech types who jumped straight to suggesting solutions rather than actually testing the product. Another common mistake is not selecting enough testers, especially when they come from different user groups with different needs.

There is no doubt that it can be difficult to get the right people to agree to test your product. Offering an incentive helps and is a great way to start your all important “charter customer program,” which has endless benefits (read more about charter customer programs here).

Don’t treat all feedback as gospel

Not every opinion a tester raises is cause for concern or a reason to go back to the drawing board. Distilling feedback is half the battle. What matters is the volume of testers raising the same issue or concern, so don’t panic after the first couple of tests - patterns will soon emerge.

Make sure there is something worth testing

Although we advocate for user testing early and often, not everything needs to be tested. If a test is too basic to add value, keep moving forward and test later. Budgets can get bloated with unnecessary testing that holds up progress, when it becomes clear after two or three sessions that the test is too easy and is simply testing for testing’s sake. To green-light an effective test, there should be at least three or four specific areas identified where the designer needs feedback. If a designer can’t think of anything, perhaps a test isn’t needed yet.

Practice tests, especially if designs are in flux 

Nothing throws folks off usability testing more than awkward, unprepared sessions. It’s difficult to get time in testers’ diaries, so make the most of that time by running a smooth test - they will feel more comfortable and communicative as a result. The only way to achieve this is by practicing the test a number of times internally; if you are changing designs as you go, this becomes even more important.

Leave enough time at the end to probe further

The aim is to let the tester do the talking, so it’s important to leave time at the end of the test to return to certain points for clarification, so you aren’t left wondering exactly what they meant by “X, Y, and Z.” A 5-10 minute conversation slot at the end of a test will help to summarize the session and clarify comments. It comes down to time management and not trying to squeeze too much into one test.