A Common Screener uses the same set of questions to recruit across multiple studies. This approach works when teams that would otherwise recruit separately rely on the same types of background questions to identify their participants (for example, common tasks, industry, or company size).
Each team member who uses the Common Screener for a specific study selects their own inclusion criteria for each question. For example, one study might need users from companies with fewer than 100 employees, while another study might need users from companies with more than 1,000 employees (see Figure 1). Each study would set different inclusion criteria for the same common question about company size.
The above figure illustrates how the same questions can be used to match participants to various studies with the Common Screener approach. In this example, a participant from a company with more than 100 people would be screened out entirely if a single-study screener were used, but may be matched to another study when the Common Screener approach is used. This is possible because each study sets different inclusion criteria for the same question about company size. The Common Screener approach also allows us to match participants who align with different Personas, based on a JTBD question that asks them to select their key tasks.
Common Screeners leverage Qualtrics functionality to find matches between study inclusion criteria and participant profiles, as captured by their responses to common questions. As in the example, we might include a question with a list of JTBD tasks from our handbook to identify the Persona for each respondent. We then use their responses to match them with studies that are looking for that Persona. Combined with a question about company size, this lets us match respondents with different studies that are looking for the same Persona from different company sizes and/or different Personas from the same company size.
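Conceptually, the matching works like a filter over each study's inclusion criteria. The sketch below illustrates that idea outside of Qualtrics; the study names, field names, and criteria are hypothetical examples, not our actual screener configuration.

```python
# Illustrative sketch of Common Screener matching logic.
# Study names, fields, and criteria are hypothetical, not our
# actual Qualtrics setup.

def matching_studies(respondent, studies):
    """Return the names of studies whose inclusion criteria the respondent meets."""
    matches = []
    for study in studies:
        criteria = study["criteria"]
        size_ok = criteria["min_size"] <= respondent["company_size"] <= criteria["max_size"]
        persona_ok = respondent["persona"] in criteria["personas"]
        if size_ok and persona_ok:
            matches.append(study["name"])
    return matches

# Two hypothetical studies sharing the same common questions,
# each with its own inclusion criteria.
studies = [
    {"name": "Study A",
     "criteria": {"min_size": 0, "max_size": 99, "personas": {"Sasha"}}},
    {"name": "Study B",
     "criteria": {"min_size": 1001, "max_size": 10**9, "personas": {"Sasha", "Devon"}}},
]

respondent = {"company_size": 2500, "persona": "Devon"}
print(matching_studies(respondent, studies))  # -> ['Study B']
```

A respondent screened out of one study (here, Study A's small-company criterion) can still be matched to another study drawing on the same answers.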
These kinds of screeners are best suited for studies that are planned early and share similar screening questions across participants, like company size.
Common Screeners do the following:
The Verify and Package teams worked on the first GitLab iteration of a Common Screener in early 2022. We were able to use the same screener to recruit across 5 Problem Validation studies (more here). We are currently iterating on this process to see whether the same screener can support recruitment for both Problem and Solution Validation studies across the second half of 2022 (more here).
Our pilot focuses on User Personas across Verify and Package teams:
Our Common Screener uses questions about a few key areas to help us match respondents with research studies, including:
A question about key tasks is a good example of something that can be used in a Common Screener, because different answers to the same question can help us match respondents to different studies.
Asking respondents to select their common tasks from a list that spans multiple Personas also minimizes the response bias introduced by respondents who select tasks they don't actually perform in the hope of being included in a research study. Below is an example of what a task-based question might look like:
Which of the following are part of your primary job responsibilities? Select all that apply.
- Lead the design of an effective, empathetic, and efficient user experience
- Translate product designs into application code
- Deploy, build, and release code
- Write application code to implement features and bug fixes
- Maintain and scale infrastructure and configurations
- Work with teams to implement security fixes and/or run security tests
- Run and test pipeline builds
- Coordinate and orchestrate releases
- Build and implement tools to enhance security
Table 1. Tasks from the handbook page that help us differentiate which respondents map to each User Persona included in our pilot.
| User Persona | Differentiating Task |
|---|---|
| Presley, Product Designer | Lead the design of an effective, empathetic, and efficient user experience |
| Sasha, Software Developer | Translate product designs into code |
| Devon, DevOps Engineer | Deploy, build, and release code; Provide pipeline definitions and CI templates; Use code to implement features and bug fixes |
| Sidney, Systems Administrator | Maintain and scale infrastructure and configurations; Build servers, deploy to them, and/or help developers to do so |
| Sam, Security Analyst | Work with teams to implement security fixes; Run security tests and/or flag potential security issues |
| Rachel, Release Manager | Run and test pipeline builds; Automate pipelines; Coordinate teams across releases |
| Alex, Security Operations Engineer | Address security incidents; Build and implement tools to enhance security |
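The mapping in Table 1 can be expressed as a simple lookup from a respondent's selected tasks to the Personas they imply. This is a sketch of that idea only; the task strings below are abbreviated from the example question, and the function name is hypothetical.

```python
# Illustrative lookup from differentiating tasks (Table 1) to User Personas.
# Task strings and the function name are examples, not a production mapping.

TASK_TO_PERSONA = {
    "Lead the design of an effective, empathetic, and efficient user experience": "Presley, Product Designer",
    "Translate product designs into application code": "Sasha, Software Developer",
    "Deploy, build, and release code": "Devon, DevOps Engineer",
    "Maintain and scale infrastructure and configurations": "Sidney, Systems Administrator",
    "Work with teams to implement security fixes and/or run security tests": "Sam, Security Analyst",
    "Run and test pipeline builds": "Rachel, Release Manager",
    "Build and implement tools to enhance security": "Alex, Security Operations Engineer",
}

def personas_for(selected_tasks):
    """Return the set of Personas implied by a respondent's selected tasks."""
    return {TASK_TO_PERSONA[task] for task in selected_tasks if task in TASK_TO_PERSONA}

print(personas_for(["Run and test pipeline builds"]))
# -> {'Rachel, Release Manager'}
```

Because the answer options span multiple Personas, a single "select all that apply" question can route the same respondent pool to several different studies.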
Yes, use of a Common Screener approach requires that:
Here are the steps for PMs and Designers to take if you’d like to set up a new Common Screener:
Once the Common Screener is set up, there are a few more steps to follow.
| Common Screener | Types of Studies |
|---|---|
| Benchmark Loop Stages Common Screener | 60, 90, or 120 min Zoom sessions or moderated usability studies |
| 2023 CI/CD Solution Validation Studies | Surveys, 20 min online unmoderated studies, 30 or 60 min interviews or moderated usability sessions |
| Problem Validation + Foundational Research 2023 | 30 or 60 min Zoom interviews, 60 min interviews, 30 or 60 min task-based moderated usability studies |