How to create effective screener surveys to recruit participants

June 7, 2024

Screening for the right participant profile is crucial for collecting data that is insightful and relevant to your research objectives. A common mistake in the screening process is generalizing the participant profile with demographic information that carries little meaning and fails to capture any depth. Under the rationale that your product appeals to a wide audience, you may want to collect data from a diverse set of participants without focusing on specific key characteristics. However, research findings from a broadly defined profile will be limited to surface-level observations, lacking the depth essential for actionable research insights.

A well-written screener survey enables researchers to easily identify and select the participants who best align with the study objectives, ensuring that participants possess the key characteristics and behaviors of the target audience. No matter how well-structured or detailed a research plan is, the data will not be relevant if you are speaking to the wrong participant profile. A well-formulated screener survey thus acts as a gateway to meaningful, actionable insights that contribute to the success of the overall research. In this post, we provide 11 tips and suggestions for creating an effective screener survey.

5 Approaches to keep in mind
  1. Get to know the participants
  2. Be transparent without giving away too much study information
  3. Focus on behaviors instead of demographics
  4. Screener ≠ Survey. Keep it concise.
  5. Manually review participants one-by-one
Improving the screener flow
  1. Keep important filtering questions early on
  2. Save essential open-ended questions for the end
  3. Use skip logic as needed
Framing questions effectively
  1. Add red herring (trick) questions
  2. Avoid leading questions
  3. Avoid binary yes-no questions

5 Approaches to keep in mind

1. Get to know the participants

You want each session to be as successful as possible by recruiting quality participants who best fit your research objectives. Including questions that probe participants' motivations, tool usage, behaviors, and preferences allows you to better understand your participant profile. Based on their responses, branching logic can help you delve a layer deeper. Even though data collected from screener surveys is limited in length and depth, use it as a first pass to examine participant information. Below are a few basic examples that are worth exploring:

  • Probing what tools they use:
    Which of the following tools have you used in the past 6 months? [Multi-select]
  • Understanding their motivations at a high level:
    Which of the following best describes the purpose of using the product? [Multiple choice]
  • Exploring potential problem areas:
    If any, what challenges or pain points have you experienced using the product? [Open-text]

The example questions are generic, yet relevant enough to be added to most screener surveys. Open-ended questions can be used to gauge the overall quality of participants by examining the depth and length of their responses. Those responses can later be used to connect the dots with the full study data.

If your study is moderated, use the screener information to start the conversation and set the direction of the session. Referring to participants' screener responses shows that you care about the details they provided and that you've done your homework ahead of time.

2. Be transparent without giving away too much study information

Clearly communicate the purpose of the research and how participant data will be used. Transparency builds trust and increases the likelihood of honest responses. However, keep in mind that giving away too much information about the study can unknowingly bias participants or allow less fitting participants to pass the screener based on the information provided.

Because a study description will not filter any participants, don't give away key details in it. A description stating “we’re looking for iOS developers with experience in Swift” won't stop non-iOS developers from taking the survey; it only tells them which answers will qualify.

Instead, add questions that survey their job titles and programming-language experience, as below.

What is your job title or role? [Short-text]

Which of the following programming languages do you have experience in? [Multi-select]

  • Python [May Select]
  • Java [May Select]
  • C++ [May Select]
  • Swift [Must Select]

Instead of explicitly mentioning iOS development as a requirement, the description can refer vaguely to app development to conceal the key requirement. This balance is crucial: enough disclosure to pique interest and commitment, but not so much that it shapes participants' answers.
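If your recruiting platform doesn't support must-select rules out of the box, the logic is straightforward to express yourself. Below is a minimal Python sketch of how the rule above could be evaluated against a response; the option names match the example, but the response format is a hypothetical illustration, not any particular platform's API.

  # Minimal sketch of the must-select rule from the example above.
  # The response format is hypothetical, not tied to any platform.
  MUST_SELECT = {"Swift"}                  # options the participant must pick
  MAY_SELECT = {"Python", "Java", "C++"}   # acceptable, but not required

  def qualifies(selected_languages: set[str]) -> bool:
      """A response qualifies only if every must-select option was chosen."""
      return MUST_SELECT.issubset(selected_languages)

  print(qualifies({"Python", "Swift"}))  # True: Swift was selected
  print(qualifies({"Python", "Java"}))   # False: Swift is missing

Note that the rule says nothing about iOS; the real requirement stays hidden inside an innocuous-looking multi-select.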

3. Focus on behaviors instead of demographics

Demographics often don't tell you much about participants. Unless demographic information such as age, geography, or education level is critical to your research objectives, filtering participants on specific demographics will only limit the scope of the study. Instead, focus on participant behaviors. Ask questions that aim to understand their motivations, their pain points, the tools they use, and what their workflows and daily tasks look like.

Let’s say you run a study to increase adoption of a software development kit among student developers. There are initially a few factors to consider: age, major, grade level, and so on. More important than age or major, however, are participants' years of programming experience and the programming languages and tool sets they use. These behavioral factors are more informative and relevant than mere demographic information. Consider why certain characteristics are important to the study and how they would affect its outcome.

🖌️ Quick Tip

To learn more about characterizing your participant profile, we recommend our article on understanding the different profiles for B2C and B2B.

4. Screener ≠ Survey. Keep it concise.

Keep the screener concise and to the point. The purpose of a screener is to find the best-fitting participants for your research needs. While it is important to gather enough information to determine whether participants are a good fit for the study, long surveys that require elaboration discourage participants from completing them. Aim for a balance between gathering necessary information and respecting participants' time. Imagine you are given 3 minutes to ask as many questions as possible to determine whether a participant is a go or a no-go: you would ask only the questions that are relevant.

Even though open-text field questions help you gauge qualitative signals from participants, limit them to at most three. The more open-text questions you require, the more likely participants are to abandon the survey. Keep in mind that you are trying to get an early signal of the what, rather than digging deep into the whys and hows.

5. Manually review participants one-by-one

Especially for moderated studies, which demand more commitment from the product team, you want every session to be as engaging and insightful as possible. To schedule high-quality participants, it is sometimes helpful to review the candidate list manually, even though many research platforms offer automated scheduling and invitations. Reading through open-text responses is typically a good indicator of how eager participants are and how relevant they are to your study topic. Reviewing open-text field data can help in the following ways (a small sketch of such checks follows the list):

  • Detecting anomalies or mismatches with responses to earlier questions.
  • Taking note of comments that interest you, so you can follow up during the moderated session.
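If you export screener responses, even a few lines of code can pre-flag candidates worth a closer look. The following Python sketch is illustrative only; the field names and thresholds are hypothetical, not from any specific platform.

  # Illustrative pre-review flags; field names and thresholds are hypothetical.
  def review_flags(response: dict) -> list[str]:
      flags = []
      pain_points = response.get("pain_points", "").strip()
      # Very short open-text answers often signal low effort.
      if len(pain_points.split()) < 5:
          flags.append("open-text answer is very short")
      # Cross-check: daily usage claimed, yet no tools selected earlier.
      if response.get("usage_frequency") == "daily" and not response.get("tools"):
          flags.append("daily usage claimed but no tools selected")
      return flags

  candidate = {"pain_points": "None really.", "usage_frequency": "daily", "tools": []}
  print(review_flags(candidate))  # both flags fire for this response

Flagged responses aren't automatic rejects; they simply show where your manual review time is best spent.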



Improving the screener flow

Like any study, screeners require a well-thought-out survey flow. Below are three quick tips for making your screener compact and deliberate.

1. Keep important filtering questions early on

Time is valuable for both you and the participants. The last thing you want is for participants to answer all the nitty-gritty demographic questions only to be rejected by the next key screening question. Place the essential screening questions early on. These questions should directly relate to the characteristics or experiences you are targeting so that you can filter out less relevant responses. General and demographic information can come later.

2. Save essential open-ended questions for the end

Leave open-text field questions for the end, where they can teach you more about the participants. These questions serve as hints for assessing participants' fit, commitment, and eagerness to participate in your study. Moreover, they can set expectations about the research by introducing the type of information you are looking for. Typically, the longer and more elaborate the responses, the more likely the participant is to be high quality.

These open-ended questions are a double-edged sword: they can shed more light on participants, or they can discourage participants from completing the survey. An easy, useful example is probing participants' current overall experience and pain points with a particular tool or job-to-be-done. Below are some generic open-ended questions that can be tailored to your research objectives:

  • How is the overall experience using the product?
  • Why do you use product X over Y?
  • If any, what pain points have you experienced using the product?
  • If any, what changes can we make to improve the experience?

3. Use skip logic as needed

Implement skip logic to tailor the survey based on participants' responses. Skip logic not only streamlines the survey but also ensures that participants engage only with questions relevant to their profile. Displaying a minimal number of questions helps avoid participant fatigue and encourages participants to stay focused until the end. When a participant doesn't meet the study criteria, they can be routed directly to the end of the survey.
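Conceptually, skip logic is just a branching rule from a (question, answer) pair to the next question. The Python sketch below illustrates the idea; the question IDs and routes are hypothetical, and real survey tools let you configure the same thing without code.

  # Skip logic as a branching rule; question IDs and routes are hypothetical.
  def next_question(question_id: str, answer: str) -> str:
      # Disqualified participants skip straight to the end of the survey.
      if question_id == "q1_usage_frequency" and answer == "Never":
          return "END"
      # Only participants who mention Swift see the iOS-specific block.
      if question_id == "q2_languages" and "Swift" not in answer:
          return "q4_general_tools"
      default_routes = {
          "q1_usage_frequency": "q2_languages",
          "q2_languages": "q3_ios_followup",
          "q3_ios_followup": "q4_general_tools",
      }
      return default_routes.get(question_id, "END")

  print(next_question("q1_usage_frequency", "Never"))   # END
  print(next_question("q2_languages", "Python, Java"))  # q4_general_tools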

Framing questions effectively

1. Add red herring (trick) questions

There may be participants who attempt to fake their way into studies that involve incentives. To ensure the quality of the participant panel, most user research platforms take various measures, such as email and phone verification and feedback loops for reporting participants. Nonetheless, red herring questions in your screener add an extra layer of safeguarding against poor-fitting participants making it through. If your participant profile involves specific knowledge or skills, it is worth adding a few red herring options to screener questions, as in the example below.

  • Which of the following user research tools have you used in the last 6 months? [Multi-select]
    • Hubble [Must select]
    • User Interviews [May select]
    • UserTesting [May select]
    • User Guide [Reject, Red herring]

As in the example above, include a fictitious option that appears plausible. Selecting the red herring option indicates that the participant is not actually familiar with the topic.
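In code form, a red herring is simply an option whose selection rejects the response outright. Here is a minimal Python sketch mirroring the example above; the response format is hypothetical.

  # Red herring check for the tools question above; "User Guide" is the
  # fictitious option. The response format is hypothetical.
  RED_HERRINGS = {"User Guide"}
  MUST_SELECT = {"Hubble"}

  def passes_tools_question(selected: set[str]) -> bool:
      # Any red herring selection rejects the response outright.
      if selected & RED_HERRINGS:
          return False
      return MUST_SELECT.issubset(selected)

  print(passes_tools_question({"Hubble", "UserTesting"}))  # True
  print(passes_tools_question({"Hubble", "User Guide"}))   # False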

2. Avoid leading questions

Leading questions can prime participants to provide a desired answer. Whether intended or not, you may have framed a question in a way that biases the response. An easy way to avoid leading questions is to frame them neutrally, beginning with “how.”

Also, avoid using strongly emotive words in your questions. While you may want to make the screener exciting and fun, doing so often produces questions that read as leading. Here are two survey questions we’ve seen in the past:

  • How amazing is our product?
  • What three fun adjectives best describe product X?

In the two examples above, amazing and fun are emotive words that presume the product is already amazing and fun. As less exciting but neutral variants, the questions can be rewritten without those emotive words:

  • How is our product? (Or: How is the overall experience with the product?)
  • What three adjectives best describe product X?

3. Avoid asking binary yes-no questions

Binary yes-no questions don’t tell you much. A common mistake we see is screeners filled with a sequence of yes-no questions to filter participants. An example is below:

Question: Do you use the product Hubble?

When you want to identify whether participants use a certain product, don’t simply ask a yes-no question. Instead, offer options that are more telling than yes or no. Related to the previous point, focus on learning participants’ behaviors. The question can be rewritten as below:

Alternative 1:

  • Which of the following user research tools have you used in the last 6 months? [Multi-select]
    • Hubble [Must select]
    • User Interviews [May select]
    • UserTesting [May select]
    • User Guide [Reject, Red herring]

Here, the question offers a variety of research tools along with a red herring option. Specifying “in the last 6 months” also adds a layer of context. You can then expand the question to:

Alternative 2:

  • How frequently do you use Hubble for research? [Select one]
    • A few times a week [Accept]
    • Once a week [Accept]
    • Once a month [Accept]
    • Never [Reject]

The second version still probes participants’ usage of research tools. Instead of merely identifying whether a participant uses Hubble, it uncovers how often they engage with the product, which is much more informative. There are many ways to ask questions in a non-binary format.
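Tying the two alternatives together, a complete screener rule might combine the multi-select check with a frequency threshold. The Python sketch below mirrors the [Accept]/[Reject] tags above; the response shape is hypothetical.

  # Combined rule mirroring the two alternatives above; the response
  # shape is hypothetical.
  ACCEPTED_FREQUENCIES = {"A few times a week", "Once a week", "Once a month"}

  def passes_screener(tools: set[str], frequency: str) -> bool:
      must_select_ok = "Hubble" in tools          # Alternative 1
      no_red_herring = "User Guide" not in tools  # reject red herring picks
      frequency_ok = frequency in ACCEPTED_FREQUENCIES  # Alternative 2
      return must_select_ok and no_red_herring and frequency_ok

  print(passes_screener({"Hubble", "UserTesting"}, "Once a week"))  # True
  print(passes_screener({"Hubble"}, "Never"))                       # False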

We’ve looked at several ways to improve your screener survey so it effectively selects high-quality participants for your next study. Every question in a screener needs to be intentional and purposeful. By following these tips and suggestions, you can create a screener that efficiently identifies participants who align with your research objectives, ultimately yielding valuable insights for your product or service development.

Frequently Asked Questions

How are screener surveys different from surveys?

Screener surveys are often used as a preliminary step in the recruiting process to filter participants based on criteria for targeted studies, ensuring a specific participant pool. Regular surveys collect broader feedback on opinions and experiences from participants.

How long should screener surveys be?

Screener surveys should be concise and focused, typically taking no more than 5-10 minutes for participants to complete. Keeping them brief ensures higher completion rates and encourages accurate responses from potential participants.

Why do you need a screener survey?

A screener survey is essential for pre-qualifying participants based on specific criteria, ensuring that the right individuals are selected for targeted studies. This helps in obtaining relevant and meaningful insights, improving the effectiveness of the overall usability testing and research efforts.

How many questions should I include in a screener?

Keeping the screener survey concise is important so that qualifying participants actually complete it. Although there is no hard limit on the number of questions you can ask, the screener should take no longer than 5 to 10 minutes. To keep it relevant, place the crucial screening questions early in the survey.

Jin is a UX researcher at Hubble who helps customers collect user research insights. Jin also helps the Hubble marketing team create content related to continuous discovery. Before Hubble, Jin worked at Microsoft as a UX researcher. He graduated with a B.S. in Psychology from U.C. Berkeley and an M.S. in Human-Computer Interaction from the University of Washington.
