When you hear a customer problem, it can be tempting to just dive right in and devise a solution. However, it’s important to remember there is never just one good solution to a problem. A problem can be solved in many different ways, depending on what we need to focus on.
Users can also be unpredictable: what we think might solve their pain points may not actually begin to address the problems they are facing. Therefore, it’s advisable to test your ideas before you start building a solution. One way to do this is to write and test hypotheses.
A hypothesis is basically an assumption. It’s a statement about what you believe to be true today which can be proven or disproven using research.
A strong hypothesis is usually driven by existing evidence. Ask yourself: why do you believe your assumption to be true? Perhaps your hunch was sparked by a passing conversation with a customer, something you read in a support ticket or issue, or even something you spotted in GitLab’s usage data.
There are lots of different structures for hypotheses, but I recommend using this simple statement:
We believe [doing this] for [these people] will achieve [this outcome].
The statement consists of three elements.
The first part:
We believe [doing this] should detail your proposed solution to users’ problems.
The second part:
For [these people] should identify who you are targeting.
The third and final part:
will achieve [this outcome] is where you should document your measure of success. What is your expected result?
We believe storing information about how an incident was resolved, how long it took to resolve, and what the outcome was in a historical record
for engineers responsible for incident management
will achieve a 20% faster resolution time for incidents.
This is because referring to past incident information helps to inform potential solutions for remediation.
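To make the “20% faster resolution time” outcome concrete, here is a minimal sketch of how you might check whether measured resolution times meet that target. The numbers are invented for illustration; in practice you would pull real incident durations from your own data.

```python
# Hypothetical numbers: mean incident resolution time (in minutes)
# before and after introducing the historical record.
baseline_mean = 120.0
observed_mean = 90.0

# Percentage improvement relative to the baseline.
improvement = (baseline_mean - observed_mean) / baseline_mean * 100
print(f"{improvement:.0f}% faster")

# The hypothesis is supported if the improvement reaches the 20% target.
target = 20.0
print("hypothesis supported" if improvement >= target else "hypothesis not supported")
```

Framing the outcome as a number you can actually compute is what makes the hypothesis testable rather than a vague aspiration.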
When writing your hypothesis, focus on simple solutions first and keep the scope small. If you’re struggling to articulate your assumptions about users, it’s probably better to start with developing a better understanding of users first, rather than forming weak hypotheses and running aimless research studies.
A strong hypothesis is easy to test. It shouldn’t take you much time to design a research study to validate or invalidate your hypothesis.
If your hypothesis is invalidated by users, don’t feel disheartened. You’ve stopped precious Engineering time being spent on building a solution that simply doesn’t solve users’ problems. A good measure of being iterative is throwing something away because user research proved that it wasn’t going to work. You’re not always going to get things right the first time. We learn more about user needs as a result of testing multiple hypotheses and, in turn, we generate new ideas for future rounds of testing.
What you say and do during a research study is really important to GitLab. We create issues to resolve the problems we witness during a study. To make sure our issues correctly represent what you say and do, we would like to record: (1) the conversation you have with our [researcher/designer/product manager/team] (2) anything you choose to share on your screen with us during the study.

Please indicate below whether you give your permission to be recorded.

* I agree, I give my permission for my voice and screen to be recorded.
* I disagree, I do not want my voice or screen to be recorded.
Use the permission setting: On - Anyone at GitLab with the link. If the participant has given permission for the recording to be shared publicly (On - public on the web) but the recording includes either sensitive information or could be defamatory to an individual or organization, please refrain from sharing the video publicly (use the permission setting: On - Anyone at GitLab with the link).
At GitLab, we value transparency. By making information public, we can reduce the threshold to contribution and make collaboration easier. We would love to share the recording of the research study on GitLab. This is completely voluntary and up to you.

Please indicate below whether you give your permission for the recording to be shared on GitLab.

* I agree, I give my permission for the recording to be shared on GitLab.
* I disagree, I do not want the recording to be shared on GitLab.
Below are email/message templates that you can use for communicating with research participants. Whenever possible, we strongly encourage you to customize each message for the individual and circumstance.
Hi! We have a study coming up, and I was wondering if you might be interested in participating. Based on your response to our survey, you look like a great fit! Sessions are taking place from `[XX-XX]`, and they last about `[XX]` minutes over Zoom (videoconference). For this round of testing, we’ll be chatting about `[Replace with research subject. Example: What tools you use, what your process is like, etc.]`. Participants who complete a session will be compensated with a `[Replace with compensation. Example: $60 Amazon gift card, or approximate value in your home currency]`. If you are interested, go here `[Link to your Calendly]` and choose a time and day that works for you as soon as possible. There are limited spots available. Please let me know if you have any questions.
Hi all! :wave: We are in the process of `[Replace with research subject]` to `[Replace with research goals for context]`. We need internal customers to answer a few questions. If you would like to help us out, please reply to this survey `[Link to research survey]`. Thank you!
A discussion guide is a set of questions and topics that you would like to discuss with a participant during a user interview. It typically consists of an introduction, warm-up questions, exploratory questions and a debrief. Today, I’m going to walk you through how to create a discussion guide.
Introduce yourself and let the participant know what to expect during the interview. Give them a chance to ask questions. Most people won’t have been interviewed before so take some time to put them at ease. Prior to the interview, you should have already obtained written consent to record and possibly share the conversation you have with the participant. However, it’s a good idea to double check verbally that the participant is still happy to be recorded and for the conversation to be shared.
Start by asking the participant a few easy questions about themselves and their job. This will help the participant get used to the process of answering questions. It’s also an opportunity to begin building rapport with the participant, so that they are more inclined to open up to you when you begin asking exploratory questions. Listen closely; their answers may help provide context for later responses they give. Some warm-up questions you could ask are:
When you start writing your exploratory questions, you’ll want to group questions into common topics so that your interview flows naturally. As you begin to structure your questions, allocate time for each topic. This will help keep your interview on track. Move from general questions to more specific questions related to your research goals. For example, from ‘How do you currently go about this task?’ to ‘What’s the hardest part about this task?’ to ‘What could be better about how you currently do this?’. At the same time, don’t leave your most important questions until the very end, in case a user spends more time than you anticipate answering an earlier question.
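As an illustration of allocating time per topic, here is a small sketch that budgets minutes against the session length. The topics and durations are invented; adjust them to your own study.

```python
# Hypothetical discussion-guide time budget for a 45-minute session,
# moving from general topics toward more specific ones.
session_minutes = 45
topics = {
    "Introduction and consent check": 5,
    "Warm-up questions": 5,
    "Current workflow (general)": 10,
    "Pain points (specific)": 15,
    "Debrief and next steps": 5,
}

allocated = sum(topics.values())
print(f"Allocated {allocated} of {session_minutes} minutes")
print(f"Buffer for overruns: {session_minutes - allocated} minutes")
```

Leaving a few minutes of buffer acknowledges that participants often spend longer than you anticipate on earlier questions.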
It’s okay to ask questions about past experiences, as long as you recognize the limitations of people’s memory. The human memory is fallible, and it can often be difficult for people to remember specific details. For example, if I asked you whether you had breakfast three days ago, you could probably tell me yes or no. Yet, if I asked you to recall how long your breakfast took to eat, you’d probably struggle to provide an answer, or you might even be tempted to hazard a guess. Ask questions which delve into participants’ general experiences and opinions, but don’t press participants for details they can’t provide. Otherwise, they may be tempted to make up their answers.
Participants can’t predict the future. If you ask them a question like: 'Would you use this feature?' their response may not be an accurate reflection of what they would actually do. For example, some people might say ‘No’ because they might not be able to visualize how the feature would work from a description alone. Others might say ‘Yes’ because they don’t want to rule out the possibility that at some point in the future the feature might be useful to them.
Thank the participant for their time and explain what happens next with the feedback they have given you today. Give the participant a chance to ask any questions. If you are paying a participant for taking part in your study, ensure you share details of how they will be paid and when they can expect payment. Leave your contact details with them in case they have any follow-up thoughts they want to share with you.
Once you have written your discussion guide, rehearse and test it out, for example with a colleague. This will give you a sense of how long your script will take to run through, and it will help you spot any questions that people may have difficulty answering.
Remember, your discussion guide is just that: a guide. It’s a reference tool that helps facilitate conversation. If a participant says something interesting that is not covered by your guide, listen to them and explore what they are saying. You may uncover something you hadn’t previously considered. Active listening is key: you should react to what your participant is saying.
When you conduct an interview, it’s crucial that you are able to build rapport with your participants. People are more likely to talk and let their guard down if they feel relaxed. The quality of the interview and the data you collect will suffer if you are unable to earn a participant’s trust. While it might sound obvious, you should: greet participants by name, smile (a positive mood is contagious), be friendly, and initiate small talk before transitioning into your interview.
Let the participant do most of the talking
You should avoid talking about your own opinions. If you share too much of your own experiences, you risk influencing your participant’s answers. They will be less forthcoming and open if they disagree with your opinion, which may lead them to skew their answers, and you’ll end up with inaccurate data.
Silence during interviews is sometimes hard to deal with. As tempting as it is to talk during these awkward moments, it’s actually better if you give participants the opportunity to fill these gaps.
Silence is a natural and important part of user interviews; it allows participants to pause and gather their thoughts. It gives them the sense that you’re waiting for them to say something, and it usually encourages them to speak their thoughts out loud. By jumping in and filling that gap, you might interrupt a participant’s thoughts and miss out on a key insight.
Remain neutral while demonstrating empathy
Remaining neutral is something that takes most people a lot of practice. When a participant has experienced a difficult or frustrating situation, our natural instinct is to empathize with them. However, we need to act sympathetically without leading the participant or making assumptions.
For example: Imagine a DevOps Engineer tells you that they are responsible for incident management. They’ve had a rough week. They’ve been frequently woken up in the middle of the night to attend to incidents.
As an empathetic human being, your natural reaction may be something like: “That must have been really frustrating for you!” but that would be leading the participant. Instead, you could show some concern by asking the participant to elaborate: “Can you tell me more about that?”.
You could even try a question like “How did that make you feel?” but only if the user hasn’t already indicated how they felt. By asking a question that relates to the participant’s feelings, you can show that you are listening and that you empathize with their situation.
Be an attentive listener
Turn off your desktop notifications. Close down the million tabs that you have open and leave your phone in another room. It is crucial that you are not distracted during an interview.
Make the participant feel heard by nodding, looking at them directly through your camera and offering acknowledgments like “Hmm” and “I see”. Always let participants finish their thoughts. Do not interrupt them unless absolutely necessary.
The better we listen, the better data we can gather. Attentive listening is really important because participants take time out of their day to talk to us. It’s just plain good manners to give them our full attention and make them feel like they’re being heard.
Even if you think you know the answer to a question - ask the question anyway. It’s not about what we know, it’s about trying to understand what the participant has to say on the subject. We need to be mindful of our own biases and assumptions and remain curious. Also, don’t assume participants wouldn’t know the answer to a question or will provide a poor response. Ask the question anyway and see what they have to say.
Don’t lead users
A common concern that most people have when conducting user interviews is unintentionally leading a participant. If a participant says something that is unclear to you, or that you want to follow up on, and you can’t quite find the right words on the spot, a simple technique is to repeat back what the participant has said with some intonation.
For example, imagine a participant said:
“The interface isn’t intuitive”
The facilitator could say:
“The interface isn’t intuitive?”
This is especially useful when a participant uses a buzzword like “intuitive”. It’s important to dig into what the participant actually means when they use a word like this. As mentioned earlier, we must be mindful of using our own assumptions to interpret the meaning of “intuitive”. This simple technique encourages participants to continue talking, without unintentionally influencing their response.
How to keep a user interview on track
As a moderator, it’s your job to keep the interview on track. Most participants are thrilled to speak to someone from GitLab and are keen to share their pain points and concerns surrounding the product. However, sometimes participants digress from the topics you want to discuss. Veering off-topic isn’t necessarily a bad thing. It only becomes problematic when it goes on for too long and isn’t satisfying the study’s objectives.
When this happens, politely interrupt the participant and say: “This is really interesting, but I’m conscious of the time we have together today. There are some other things I’d like to cover with you. Why don’t we move on and return to this a little later?”
Capture consistent data
Let’s say you’ve conducted around 2-3 user interviews and, so far, you feel you haven’t begun to capture the data that you need. It’s very tempting to introduce new questions halfway through a study, but this will make synthesizing your data incredibly difficult.
When we synthesize data, we are looking for patterns in responses, which is only possible if we ask the same set of questions of every participant.
Having a certain insight from a single person, when the other participants did not get a chance to share their thoughts, can create inconclusive results: we don’t know whether the insight is relevant only to that one person or whether other participants share the same opinion.
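As a small illustrative sketch (the questions and answers here are invented), asking every participant the same questions is what lets you tally answers per question and spot patterns:

```python
from collections import Counter

# Hypothetical interview data: every participant answered the same
# fixed set of questions, so responses can be compared per question.
responses = {
    "Q1: What is the hardest part of this task?": ["setup", "setup", "debugging"],
    "Q2: How often do you do this task?": ["daily", "weekly", "daily"],
}

# For each question, find the most common answer and how often it appears.
summary = {}
for question, answers in responses.items():
    most_common, count = Counter(answers).most_common(1)[0]
    summary[question] = (most_common, count, len(answers))
    print(f"{question} -> most common: {most_common} ({count}/{len(answers)})")
```

If a question had been added halfway through the study, its answer list would be shorter than the others, and a tally like this could no longer tell you whether an insight is shared or one person’s opinion.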
Instead, take a break from interviewing participants, and take some time to review your discussion guide. Remember, you can always reach out to a UX Researcher for advice. If you amend or introduce new questions in your discussion guide, then you will need to restart the process of interviewing participants from the beginning.
Speaking fast and slow
Participants come from a wide range of backgrounds and their experiences can shape the depth of their answers. Some participants will speed through questions while others will take longer to ponder the question before they reply. That’s completely natural.
For participants who speak fast: talking slowly to them can have a calming effect. It indicates that you are not anxious and that you have the time to listen to them.
For participants who speak slowly: as long as they are giving you good answers, don’t hurry them. Putting pressure on them could mean you lose out on discovering key insights.
And finally, be mindful of the time
Usually, time goes by very fast during interviews. Be respectful of the participant’s time and ensure you end the user interview at the time you have agreed.
The following are examples of checklists that you may want to add to a research issue in order to keep track of what stage the research is up to.
* [ ] Product Manager: Draft the discussion guide.
* [ ] Product Designer or UX Researcher: Create the screening survey in Qualtrics.
* [ ] Product Designer or UX Researcher: Open a `Recruiting request` issue. Assign it to the relevant Research Coordinator.
* [ ] Research Coordinator: Recruit and schedule participants.
* [ ] Moderator: Invite the UX Research calendar and any other interested parties to the interviews.
* [ ] Moderator: Conduct the interviews.
* [ ] Moderator: Open an `Incentives request`. Assign it to the relevant Research Coordinator.
* [ ] Research Coordinator: Pay participants.
* [ ] Team: Analyze videos.
* [ ] Product Manager: Create issues in the UXR_Insights project documenting the findings.
* [ ] UX Researcher: Sense check the documented findings.
* [ ] UX Researcher: Update the `Problem validation` research issue. Link to findings in the UXR_Insights project. Unmark as `confidential` if applicable. Close issue.
* [ ] Product Manager: Draft the survey.
* [ ] UX Researcher: Review the survey and provide feedback.
* [ ] Product Designer or UX Researcher: Transfer the survey questions to Qualtrics.
* [ ] Product Designer or UX Researcher: Open a `Recruiting request` issue. Assign it to the relevant Research Coordinator.
* [ ] Research Coordinator: Distribute the survey to a sample of participants.
* [ ] UX Researcher: Review responses received so far. Amend survey if necessary. Advise Research Coordinator to continue recruitment.
* [ ] UX Researcher: Notify Research Coordinator of survey closure.
* [ ] UX Researcher: Open an `Incentives request`. Assign it to the relevant Research Coordinator.
* [ ] Research Coordinator: Pay participants.
* [ ] UX Researcher: Analyze the data.
* [ ] UX Researcher: Create issues in the UXR_Insights project documenting the findings.
* [ ] UX Researcher: Share findings with Product Manager and Product Designer.
* [ ] UX Researcher: Update the `Problem validation` research issue. Link to findings in the UXR_Insights project. Unmark as `confidential` if applicable. Close issue.
* [ ] Product Designer: Create a prototype.
* [ ] Product Designer or UX Researcher: Create the screening survey in Qualtrics.
* [ ] Product Designer or UX Researcher: Open a `Recruiting request` issue. Assign it to the relevant Research Coordinator.
* [ ] Product Designer: Draft the usability testing script.
* [ ] UX Researcher: Review the usability testing script and provide feedback.
* [ ] Product Designer: Invite the UX Research calendar and any other interested parties to the usability testing sessions.
* [ ] Product Designer: Conduct one usability testing session. Amend script if necessary.
* [ ] Product Designer: Conduct remaining usability testing sessions.
* [ ] Product Designer: Open an `Incentives request`. Assign it to the relevant Research Coordinator.
* [ ] Research Coordinator: Pay users.
* [ ] Team: Analyze videos.
* [ ] Product Designer: Create issues in the UXR_Insights project documenting the findings.
* [ ] UX Researcher: Sense check the documented findings.
* [ ] UX Researcher: Update the `Solution validation` research issue. Link to findings in the UXR_Insights project. Unmark as `confidential` if applicable. Close issue.