In this Step, you will:
- Debrief with your observers after each usability test session
- Turn your observations into a deduplicated, stack-ranked list of problems
- Fix the top problems and retest to confirm they're gone

It's important to analyze the data from your usability test and act on it; otherwise the entire exercise is wasted. There is a separate module covering methods for analyzing different types of qualitative data, but here is one that is particularly well suited to usability test findings.
First, either have someone taking notes in each session or record the sessions so that somebody else can review what happened with you.
Each observer should have a usability test score card (template included in this guide) and should mark how participants scored on each task. The possible scores are "Task Completed," "Task Completed with Difficulty," and "Task Failed."
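If you keep your score cards digitally, tallying them is straightforward. Here is a minimal sketch in Python; the task names, participant IDs, and scores are purely illustrative, and the three score values come straight from the guide:

```python
from collections import Counter

# The three possible scores from the guide.
SCORES = ("Task Completed", "Task Completed with Difficulty", "Task Failed")

# Hypothetical score card: one entry per (participant, task) pair.
score_card = {
    ("p1", "find pricing page"): "Task Completed",
    ("p1", "sign up"): "Task Failed",
    ("p2", "find pricing page"): "Task Completed with Difficulty",
    ("p2", "sign up"): "Task Failed",
}

def task_summary(card):
    """Tally scores per task so problem tasks stand out at a glance."""
    summary = {}
    for (participant, task), score in card.items():
        summary.setdefault(task, Counter())[score] += 1
    return summary

summary = task_summary(score_card)
# "sign up" failed for both participants, so it is clearly a problem task.
```

A tally like this makes the debrief faster: the tasks with failures or difficulty jump out immediately.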
As soon as you finish the session, do this quick debriefing exercise with the observers.
Everybody should write down their top five takeaways from the session. I like to do this on sticky notes. Include any reasons participants failed tasks or completed them with difficulty, and refer to the score cards to remind yourself which tasks were problematic.
Once you've done this separately, compare notes. This step ensures that you and your observers are seeing the same problems. Discard duplicates so you have just one copy of each problem you've uncovered, but note how many observers saw each one. Again, this is easiest if each finding is on its own sticky note.
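The dedupe-and-count step above can be sketched in a few lines of Python. The observer names and findings here are hypothetical, and the sketch assumes you've normalized the wording of each finding so duplicates match exactly:

```python
from collections import Counter

# Hypothetical sticky notes: each observer's top takeaways as short strings.
observer_notes = {
    "alice": ["signup button hard to find", "pricing table confusing"],
    "bob":   ["signup button hard to find", "search returned no results"],
}

def merge_findings(notes_by_observer):
    """Collapse duplicate findings, keeping a count of how many
    observers reported each one."""
    counts = Counter()
    for notes in notes_by_observer.values():
        counts.update(set(notes))  # set(): one vote per observer
    return counts

findings = merge_findings(observer_notes)
# "signup button hard to find" was reported by both observers.
```

The observer count is a useful first signal for the ranking step that follows: a problem every observer noticed independently is hard to dismiss.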
Then, stack rank the findings based on importance. If you feel that some problems were much worse than others (and you will), put those at the top of the list.
After your second session, do the same thing, but add the new findings to the original stack ranking. Items will move up or down in importance as you conduct more sessions. For example, a problem that looks fairly minor in the first session may be repeated over and over by five different participants; that repetition should move it up in importance.
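One simple way to keep the ranking updated across sessions is a running tally: after each session, fold that session's findings into the totals and re-rank by frequency. A minimal sketch, with entirely hypothetical session data:

```python
from collections import Counter

# Running tally of findings across all sessions so far.
running = Counter()

# Hypothetical findings from three sessions.
session_1 = ["pricing table confusing", "signup button hard to find"]
session_2 = ["signup button hard to find", "search returned no results"]
session_3 = ["signup button hard to find"]

for session in (session_1, session_2, session_3):
    running.update(session)

# Re-rank after every session; recurring problems float to the top.
stack_rank = [finding for finding, _ in running.most_common()]
# "signup button hard to find" recurred in every session, so it rises
# to the top of the stack ranking.
```

Frequency alone isn't the whole story, of course; you'd still bump a rare-but-severe problem up the list by hand, as the guide suggests.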
Once you've collected and ranked your problems, fix them! Then run another round of usability tests to see whether the problems went away. Keep the old score cards so you can compare results.
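Comparing old and new score cards can be as simple as comparing per-task completion rates between rounds. A sketch, again with made-up participants and tasks:

```python
# Hypothetical before/after comparison: did the fix actually work?
def completion_rate(card, task):
    """Fraction of participants who fully completed the task."""
    scores = [s for (_, t), s in card.items() if t == task]
    completed = sum(1 for s in scores if s == "Task Completed")
    return completed / len(scores) if scores else 0.0

round_1 = {("p1", "sign up"): "Task Failed",
           ("p2", "sign up"): "Task Failed"}
round_2 = {("p3", "sign up"): "Task Completed",
           ("p4", "sign up"): "Task Completed with Difficulty"}

improved = completion_rate(round_2, "sign up") > completion_rate(round_1, "sign up")
# The rate moved from 0.0 to 0.5, so the fix helped, though the
# "with Difficulty" score signals there is still room to improve.
```

With small usability samples these numbers are directional, not statistical; the point is simply to confirm the problem you fixed stopped showing up.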
Why Does This Matter?
Keeping your findings persistent across all of your usability studies means you catch problems that recur from round to round. Debriefing immediately after each session also means you don't forget problems that surfaced only in the early sessions.