Many researchers call usability studies with small samples "qualitative" and those with larger samples "quantitative".
However, the two terms do not refer to sample size but to the type of data being collected.
Most research methods collect both quantitative data (success rates, completion times, satisfaction scores) and qualitative data (problems encountered, users' feedback), but that does not mean the two should carry the same weight in a usability report.
Qualitative (formative) user testing is aimed at discovering usability problems and is often carried out during the early stages of the development process.
Quantitative (summative) user testing, by contrast, evaluates the usability of a system by comparing the results against usability metrics.
Success rates are certainly important, since they provide an easy-to-understand estimate of how easy or difficult a task was, but focusing too much on percentages can be misleading when you are performing a qualitative study:
- A task may have a low success rate caused by a single, easy-to-solve issue, while another task may obtain the same success rate due to several different and more complex problems.
- Success rates measured in a user test do not necessarily correspond to those seen in real-world usage of a website. Samples are generally too small and the margin of error can be quite large (see this tool to estimate the confidence interval for a completion rate; a rough calculation is also sketched after this list). Moreover, when designing our research we may deliberately add extra difficulty in order to discover whether certain features present usability issues.
- Stakeholders have better tools to quantify the real success rates of their users.
It is not particularly useful for them to know that 60% of the participants managed to book a flight during a formative user test.
They need to know why the other 40% were not able to do it, and a good usability test report should prioritize those reasons over success rates.
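To give a feel for how wide that margin of error can be with typical qualitative sample sizes, here is a minimal sketch that computes an adjusted-Wald (Agresti-Coull) confidence interval for a completion rate. The function name and the 3-out-of-5 example are purely illustrative assumptions, not taken from any specific tool or study.

```python
import math

def completion_rate_ci(successes, n, z=1.96):
    """Adjusted-Wald (Agresti-Coull) confidence interval for a completion rate.

    successes: number of participants who completed the task
    n: total number of participants
    z: z-score for the desired confidence level (1.96 ~ 95%)
    """
    # Add z^2/2 successes and z^2 trials before computing the proportion;
    # this keeps the interval sensible for the small samples typical of
    # qualitative studies.
    adj_n = n + z ** 2
    adj_p = (successes + z ** 2 / 2) / adj_n
    margin = z * math.sqrt(adj_p * (1 - adj_p) / adj_n)
    return max(0.0, adj_p - margin), min(1.0, adj_p + margin)

# Hypothetical example: 3 of 5 participants complete the task.
print(completion_rate_ci(3, 5))
```

With only five participants, the nominal 60% success rate comes with a 95% interval spanning roughly 23% to 88%, which is exactly why reporting the percentage on its own can mislead.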
When you are building your qualitative research report, focus on errors discovered and what caused them, not percentages.
Of course, you still have to provide success rates or completion times, but try to replace your stunning charts with a well-reasoned analysis.