June 8, 2017
4 min read

Discovering GitLab’s personas

Our User Experience (UX) Researcher updates us on the progress of GitLab’s personas


Back in January, I explained why GitLab uses personas in product development. At the time, we were still in the process of discovering who GitLab’s personas were. To make sure the needs and expectations of our users were captured, we asked them to complete a survey and share their views with us. Since then, the results have been analyzed, producing the first iteration of our personas. In this post, I’d like to share more about the survey that contributed to GitLab’s personas.

Survey design

The survey contained a mixture of open-ended and closed-ended questions.

We chose to use open-ended questions because this was the first survey we had produced that aimed to explore users’ motivations for, and experiences of, using GitLab. We wanted to give participants the freedom to answer in their own words and avoid leading them towards answers they wouldn’t necessarily have selected from a closed-ended list.

Studies have shown that people tend to focus more on earlier (primacy effect) or later (recency effect) options, spending less time evaluating those in the middle. This suggests the order in which we present answer options may affect the way users respond. For example, the question ‘Why do you contribute to open source tools?’ had 10 possible answers, ranging from ‘To give back to the community’ to ‘To resolve issues I experience with the tool’. To give each answer an equal opportunity to be selected, the order of the answers was shuffled for each user. This way, no option was consistently stuck in the middle of the list, and the risk of an option being overlooked because of its position was reduced. Where possible, other closed-ended questions received the same treatment, reducing bias and ensuring a fairer distribution of responses.
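
For illustration only, here is a minimal Python sketch of that kind of per-respondent randomization. The option text is partly quoted from this post and partly a placeholder, and the function name is my own rather than anything from the survey tool we used.

    import random

    # Answer options for "Why do you contribute to open source tools?".
    # Only the first two are quoted in the post; the remaining options
    # are omitted here.
    OPTIONS = [
        "To give back to the community",
        "To resolve issues I experience with the tool",
        # ... remaining options ...
    ]

    def options_for_respondent(options):
        """Return an independently shuffled copy of the options for one
        respondent, so no option is consistently first, last, or buried
        in the middle of the list."""
        shuffled = list(options)      # copy so the master list stays intact
        random.shuffle(shuffled)      # uniform random order per respondent
        return shuffled

    # Each survey session would render the question with its own ordering:
    print(options_for_respondent(OPTIONS))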

In terms of choosing what questions to ask, we asked people working in different teams across GitLab how they would describe GitLab users. We wanted to test their assumptions, along with our own. Using these assumptions, we formed research questions. Research questions are the goals and objectives of your study, rather than the questions that appear in the survey itself. They help you clearly define what you want to find out before you even begin writing the survey. Once we had our research questions, we wrote the survey to directly address them.

To ensure we could extract the information we required from the survey, we wanted every respondent to interpret the questions the way we had intended. We asked colleagues to complete the survey to see whether their answers deviated from the intent of the questions, and any ambiguous wording was amended. The survey was then incrementally shared externally with users. This allowed us to further monitor answers, while also checking the survey for bugs (for example, were users able to submit their answers?).

Responses

We were primarily interested in hearing from engaged GitLab users, so the survey was advertised on GitLab’s blog and social media accounts, and via the UX webcast. The survey received just over 500 responses over a 50-day period.

Analysis

Surveys are by no means perfect: they only capture the views of people who feel comfortable sharing information in this way. In short, the users who chose to respond to the survey could be very different from those who chose not to respond, creating selection bias.

More than 100,000 organizations and millions of users are using GitLab, so a sample size of just over 500 people may seem relatively small. To identify users who could be underrepresented, it was important to explore who the survey respondents were. Comparing respondents with nonrespondents made it easier to identify where the weaknesses were in the data collected and to determine what needed further research. Equally, it highlighted the strengths of the data and what could be reported on with near certainty.

Some of the attributes we compared between respondents and nonrespondents included:

  • Length of time using GitLab
  • GitLab edition (Community vs Enterprise)
  • Size of organization (for users who used GitLab at work)
  • Job role

We also examined demographic and background information, such as age, location, and programming experience/qualifications.
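
As a rough sketch of what that comparison looks like in practice, the Python below tallies the share of respondents for a given attribute so it can be set against what is known about the wider user base. The field names and example records are hypothetical, not the actual survey data.

    from collections import Counter

    # Hypothetical respondent records; in reality these came from the
    # survey export, and baseline figures from other GitLab data sources.
    respondents = [
        {"edition": "Community", "tenure": "1-2 years", "org_size": "50-249"},
        {"edition": "Enterprise", "tenure": "3+ years", "org_size": "1000+"},
        {"edition": "Community", "tenure": "< 1 year", "org_size": "50-249"},
    ]

    def share_by(records, attribute):
        """Return each value's share of responses for one attribute."""
        counts = Counter(record[attribute] for record in records)
        total = sum(counts.values())
        return {value: count / total for value, count in counts.items()}

    # Comparing these shares with figures for the overall user base shows
    # which groups are under- or overrepresented among respondents.
    print(share_by(respondents, "edition"))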

Results

We added the newly formed personas to GitLab's handbook.

Don’t feel you’re accurately represented? Don’t worry! The personas are very much a work in progress, and we will continue to add to them based on further insights from user interviews, usability testing, and future surveys.

Want to share your experiences of GitLab with me? Join GitLab First Look and help us build an even better picture of who GitLab’s users really are!
