Biases in User Research and How to Avoid Them


When we conduct a user research study, there is a chance that some form of bias will creep in. If it does, it can distort our results to the point of invalidating the study. It is therefore vital to recognize and avoid biases in our user research.

Let us recall that we do user research to help participants reveal their true feelings and behaviors without distortion. If some sort of inclination or bias enters a study, we will not obtain true insights or facts from our participants, but a distorted version of reality.

So, how do we avoid biased user research?

Let’s start by learning the different types of biases we could experience during our user research and how to avoid them.

Recruitment Biases

Biases can sometimes come from the way we recruit participants for our user research. Recruitment seems like the easiest part of a study, but it is in fact one of the most important. Accordingly, our screening questionnaire has to be carefully prepared to avoid the following mistakes:

1. Availability bias

Sometimes the duration of the test or the available time slots limit participation, especially in moderated research. There is a chance your actual users are not available or are difficult to recruit (e.g. engineers who cannot take time off work, or children during school hours). These difficulties must never lead a researcher to lower the recruitment filters in order to meet the desired number of participants. Doing so would affect the results of the study.

Best practice: expand test schedules to evening hours and weekends. Remote testing can also help you recruit participants with time or mobility limitations.

2. Membership bias

These biases occur when participants share specific characteristics that can affect a study’s outcome. This happens, at times, when participants are recruited from among the employees or subsidiaries of our client. Research sponsor companies tend to choose or recommend the participants who are closest to them, more proactive or more willing to take initiative, because they are usually more open to creative or unusual work-related activities.

However, these qualities may not be representative. Not all of their employees or coworkers are innovators or outspoken people, and these characteristics might end up affecting our results. Membership bias is also related to sponsor bias, which is explained further on in this article.

Best practice: try to keep the screener balanced and randomize participants as much as possible. If possible, set quotas for each participant type in your screener (for example: 33% employed full-time, 33% self-employed and 33% unemployed), as sketched below.
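As a rough illustration, here is a minimal sketch of how screener quotas could be enforced. The employment-status groups, quota sizes and helper function are hypothetical examples, not part of any particular recruitment tool.

```python
# A minimal sketch of screener quotas (illustrative names and sizes only).
# The 33/33/33 split from the example above becomes a maximum number of
# accepted participants per group.

QUOTAS = {"employed_full_time": 5, "self_employed": 5, "unemployed": 5}
accepted = {group: [] for group in QUOTAS}

def screen(participant_id, employment_status):
    """Accept a screener respondent only while their quota group still has room."""
    if employment_status not in QUOTAS:
        return False  # outside the target profile
    if len(accepted[employment_status]) >= QUOTAS[employment_status]:
        return False  # quota for this group is already full
    accepted[employment_status].append(participant_id)
    return True

# Respondents arrive in whatever order recruitment delivers them.
screen("p01", "employed_full_time")
screen("p02", "self_employed")
print({group: len(ids) for group, ids in accepted.items()})
```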

3. Pay bias

Participants expect to receive some sort of compensation for their time, which is quite fair. However, if money is the sole motivation to participate, it can bias your study. Some participants are willing to cheat on the screener questionnaire, or even during the test itself, to make sure they receive their incentive, which will affect your results.

This type of bias happens mostly in unmoderated studies like remote usability tests and online surveys. 

Best practice: offer a fair and balanced incentive that reflects the duration and effort your study requires. You should also be very rigorous with the screening questionnaire and with the quality of your participants. Control the quality of responses and eliminate those answers that are most likely biased. For example, in a survey you can discard participants who finished too quickly or who marked the same option in every question. You can also use control questions that automatically eliminate low-quality respondents.
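Below is a minimal sketch of that kind of quality filter, assuming a hypothetical list of survey responses with per-respondent completion times, 1–5 ratings and one attention-check question. The field names and thresholds are assumptions, not part of any specific survey platform.

```python
# A rough quality filter for survey responses (hypothetical data shape).
MIN_DURATION_SECONDS = 120      # flag respondents who finished implausibly fast
ATTENTION_CHECK_EXPECTED = 2    # e.g. "Please select option 2 for this question"

def is_low_quality(response):
    """Return True if the response shows signs of pay-motivated cheating."""
    too_fast = response["duration_seconds"] < MIN_DURATION_SECONDS
    # Straight-lining: the same option marked for every rating question.
    ratings = response["ratings"]
    straight_lined = len(set(ratings)) == 1 and len(ratings) > 3
    failed_check = response["attention_check"] != ATTENTION_CHECK_EXPECTED
    return too_fast or straight_lined or failed_check

responses = [
    {"id": "p01", "duration_seconds": 95,  "ratings": [4, 4, 4, 4, 4], "attention_check": 2},
    {"id": "p02", "duration_seconds": 410, "ratings": [5, 3, 4, 2, 4], "attention_check": 2},
    {"id": "p03", "duration_seconds": 300, "ratings": [3, 4, 2, 5, 3], "attention_check": 5},
]

clean = [r for r in responses if not is_low_quality(r)]
print([r["id"] for r in clean])  # only p02 survives the filter
```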

4. Tech-savvy bias

When we conduct a study to test a mobile or desktop site or app, it is important to recruit participants who are able to use these devices in other contexts. Otherwise, they might encounter usability problems with the device itself rather than with the site we are testing.

On the other hand, it is also important to recruit participants with different levels of tech knowledge, including those who are not tech-savvy. These participants may not find the idea of testing a digital product very appealing precisely because of their lack of familiarity with it. But testing only with tech-savvy participants, who will most likely work around usability issues that would block less experienced users, can also bias your results.

Best practice: add a screener question about computer and mobile proficiency. Ensure you recruit a mix of levels and, if the study requires it, that every participant reaches a minimum level of knowledge.

Respondent Bias

Participants are the core of user research. They are the ones who provide the information we will later use to generate insights and facts. That is why it is important to pay attention to their behavior and be aware of any bias they might introduce during the research. These are the main types:

1. Social Desirability and Acquiescence bias

These two biases are very much related. The first bias refers to those participants who answer what they think you want to hear in order to please the moderator.

The second bias is the tendency of participants to agree and/or positively answer every question they are asked. It is also known as “yea-saying”. These biases make the results of your study less reliable since they do not represent what the user is truly thinking. 

Best practice: when possible, ask participants to justify their answers. Apply the 5 whys technique to dig deeper and get a more sincere response. Also, phrase questions as open questions so the participant has to formulate their own answer. Finally, make the participant feel that both positive and negative answers are welcome.

2. Habituation bias

It happens when the participant responds similarly to questions that seem alike (for example, if you ask “which of the following brands do you like?” and immediately after ask “which of the following brands do you admire?”) or when the participant seems to have lost interest in the study. If the participant does not put effort into answering accurately, this will affect the research.

Best Practice: differentiate your questions by varying the wording, and keep the study interesting in order to engage your participant. Also, control the duration of your study, especially in unmoderated research.

3. Sponsor bias

It occurs when the participant has a preconceived opinion about the brand being tested or sponsoring the study. They will answer according to their attitude toward the company rather than neutrally, which affects the test.

Best Practice: if possible, keep the test anonymous. If not, make sure the participant understands that the moderator is not part of the sponsor company, and avoid straying too far from the script.

4. Hawthorne effect

Also known as the observer effect. Participants tend to behave differently when they are aware that they are being observed. They pay more attention to what they are doing and try harder to solve any issues they might encounter.

Sometimes, when a participant notices that the moderator is taking notes, they draw their own conclusions, which can change their behavior. When this happens, we are no longer evaluating a “real user” in a “real situation”.

Best Practice: remind the participant that they should behave as if they were at home. If they would abandon a task or leave a website on their own, they should feel free to do the same during the test.

5. Recency and Primacy Effects

Primacy is the tendency to recall the first items in a list better than the following ones, while recency is the tendency to remember the last items best.

These effects also apply to user research. For example, if we ask a participant to name all the competitor brands they know, the participant might only name the companies they have seen most recently on TV.

Best Practice: always rotate or randomize the order in which questions, answer options and products are presented in your research, as in the sketch below.
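Here is a minimal sketch of per-participant randomization, assuming a hypothetical list of competitor brands shown in a survey question. The brand names and participant ids are illustrative only.

```python
# Rotate presentation order so no single item always benefits from the
# primacy or recency position (illustrative data only).
import random

BRANDS = ["Brand A", "Brand B", "Brand C", "Brand D", "Brand E"]

def order_for(participant_id):
    """Return the brand list shuffled deterministically per participant."""
    rng = random.Random(participant_id)  # seed with the participant id
    shuffled = BRANDS[:]                 # copy so the master list stays intact
    rng.shuffle(shuffled)
    return shuffled

print(order_for("p01"))
print(order_for("p02"))  # a different order for a different participant
```

Seeding with the participant id keeps each participant's order stable across questions while still varying it across the sample.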

Researcher Bias

Last, but not least, we will look at the researcher or moderator biases that can also have an effect on our user research. Here are some tips on how to avoid them.

1. Confirmation bias

It happens when the researcher uses the data only to confirm his or her own hypothesis. A researcher must never selectively interpret or omit information to fit a preconceived conclusion.

Best practice: use all data to drive conclusions with an open mind and the will to understand the true “why”.

2. Culture bias

Culture bias occurs when the researcher interprets and judges answers or behavior by the standards of his or her own culture. The researcher is not analyzing the data from a neutral starting point, so the results will not be neutral either. It is closely related to the halo effect, and it appears mostly in moderated studies.

Best Practice: make sure your tasks, script and wording are neutral. Do not expect a specific answer or reaction from participants and do not prejudge or typecast your users.

3. Question-order bias

Participants might feel conditioned to react or answer in a specific way because of the order in which a question was placed. Initial questions act as context for the following ones, which has a proven, direct effect on the results.

Best Practice: carefully prepare your script. It should start with general, open questions that lead to more specific ones. This way, you will not suggest any answers to your participants.

4. Suggestion bias

During a study, especially in usability testing, the moderator might make involuntary suggestions to the participant. These suggestions lead participants to act or answer in a specific way that may not match their original intentions.

There are several ways in which a moderator can make suggestions. By using the same wording as the copy in the user interface, the moderator clearly indicates what the participant should be looking for. Another way of suggesting is by confirming to the participant that he or she is doing well, whether verbally or through gestures. Finally, the moderator should never provide information that the participant is supposed to find on their own or already know.

Best Practice: the moderator should thoroughly go over the script of each session and avoid unnecessary interventions. Silence is golden!

Conclusion

There are many possible biases that can change the results of our user research. As we have seen, some, like pay and habituation biases, are more frequent in unmoderated studies. Others, such as the Hawthorne effect, social desirability and availability biases, are more likely to appear in moderated sessions due to the presence of observers and the difficulty of the recruitment process.

In any case, it is our task as researchers to avoid them and deliver the most realistic and reliable report possible. We must pay close attention to the screening questionnaire and the recruitment phase of our projects, and, as researchers, we must remain neutral about the participants and the results of the research.

