Calendar Year 2018 Q3 OKRs

View GitLab's Objectives and Key Results (OKRs) for the third quarter of 2018.

CEO: Grow Incremental ACV according to plan. IACV at 120% of plan, pipeline for Q4 at 3x IACV minus in-quarter orders, LTV/CAC per campaign and type of customer.
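
As a hypothetical illustration with invented numbers, reading the pipeline key result as 3x of the portion of the plan not covered by in-quarter orders: if the Q4 IACV plan were $10M and $2M of it were expected to come from orders created and closed within Q4, the pipeline entering Q4 would need to be at least 3 × ($10M - $2M) = $24M. LTV/CAC is read per campaign and customer type the same way: a campaign that costs $50k and acquires customers with $200k of expected lifetime value has an LTV/CAC of 4.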

  • VPE
    • Support: Achieved 94.5% CSAT across all customer tickets for Q3.
      • Self-managed: 95% on CSAT, 95% on Premium SLAs: Achieved 95% CSAT and 83% against Premium SLAs.
      • Services: 95% on CSAT, 95% on Premium SLAs: Achieved 93% CSAT and 94% against SLAs.
  • CMO: Achieve 3x IACV in Q4 pipeline minus in-quarter orders
    • Content: Ensure the content team selects their own work and focuses on results. No distractions.
    • Content: Increase sessions on content team blog posts. 5% month over month.
    • Content: Measure and increase pipe-to-spend for content team activities. 10% MoM.
    • Field Marketing: Improve account-based marketing for large & strategic accounts. Pull 1 large or strategic deal into Q3, increase the opportunity size of 2 large or strategic deals, 5 intros to new departments at large or strategic accounts.
    • Field Marketing: Increase operational maturity in field marketing tracking and execution. All event lead uploads within 48 hours, all contacts have accurate statuses and notes, accurate campaign data for all field marketing campaigns tracked in salesforce.com.
    • Field Marketing: Achieve 100% of field marketing portion of SCLAU plan.
    • Online Growth: Expand online growth tactics. Increase overall traffic to existing pages by 10% compared to last quarter, increase sign-ups from SEO/PPC/Paid Social by 20% compared to last quarter.
    • Online Growth: Improve about.gitlab.com conversion rate optimization. Increase live chat leads 20%, increase contact requests by 20%, increase trial sign-ups by 20%.
    • Marketing Program Management: Launch new email nurture series educating trial requesters on .com Premium plan. Keep unsubscribe rate below 1%, 30% of nurture audience at “engaged” progression status or higher.
    • Marketing Program Management: Increase awareness and ongoing education on GitLab <> Google Partnership. Increase GKE trials referred by GitLab per week by 3x.
    • Marketing Program Management: Improve reporting for marketing programs. Automate and schedule reporting for email & webcast performance, ensure all email and webcast programs are tracked completely in salesforce.com campaigns.
    • Marketing Operations: Streamline lead management processes. Refresh lead & contact layouts with all unnecessary fields removed from view, trim down to a single lead queue view for the SCA team.
    • Marketing Operations: Deploy multi-touch attribution model in Bizible.
    • Marketing Operations: Ensure all online marketing is tracked as campaigns in salesforce.com. Complete tracking of all required campaign fields for all online marketing.
    • SMB Customer Advocacy: Achieve 200% of SMB IACV plan.
    • SMB Customer Advocacy: Improve SMB account management. Increase SMB gross retention to 90%, increase net retention to 150%, reduce number of manual license key replacements by 30%.
  • CMO: Committer Program
    • Increase the number of contributions (not contributors) from the wider community. For the 11.3 release the target is 120 MRs, and for the 11.4 release the target is 150.
    • Hire full-time GitLab contributor(s) at customers. One hire made at a customer and a blog post about their initiatives published.
    • Implement at least 10 improvement ideas to streamline the contribution process for the wider community, e.g. improve the onboarding experience.
  • CRO
    • Dir, Customer Success:
      • Each SA/TAM team successfully proposes & creates at least one SOW to increase service pipeline. As a result, services bookings increase by 100%.
      • Develop an approach to support growth in strategic accounts. Successfully execute program against one Strategic account per region.
      • Leverage Customer Success team growth to increase opportunities at the highest-potential Strategic growth accounts. Achieve 300% of the Q3 growth IACV plan.

CEO: Popular next generation product. GA for the complete DevOps lifecycle, GitLab.com ready for mission critical applications, graph of DevOps score vs. cycle time.

CEO: Great team. ELO score per interviewer, Real-time dashboard for all Key Results.

  • VPE: 10 iterations to engineering function handbook: 7.5/10 (75%)
    • Infrastructure: 10 iterations to infrastructure department handbook: 10/10 (100%)
      • Production: 10 iterations to production team handbook: X/10 (X%)
    • Ops Backend: 10 iterations to ops backend department handbook: 4/10 (40%)
    • Ops Backend: Define and launch a dashboard with 2 engineering metrics for backend teams: 0%. Throughput and cycle time are the 2 metrics we would like to track; implementation of the dashboard was not started in Q3.
    • Quality: 10 iterations to quality department handbook: 8.5/11 (77%)
    • Security: 10 iterations to security department handbook: X/10 (X%)
    • Support: 20 iterations to support department handbook: 21/20 (105%)
      • Self-managed: 10 iterations to self-managed team handbook focused on process efficiency: 10/10 (100%)
      • Services: 10 iterations to services team handbook focused on process efficiency: 11/10 (110%)
    • Support: Implement dashboard for key support metrics (SLA, CSAT)
  • VPE: Source 50 candidates for various roles: 50/50 sourced (100%); confidential spreadsheet
    • Frontend: Source 100 candidates by Aug 15 and hire 2 managers and 3 engineers: X sourced (X%), hired X (X%)
      • Frontend Discussion: Source 25 candidates by July 15 and hire 1 engineer: X sourced (X%), hired X (X%)
      • Frontend MD&P: Source 50 candidates by Aug 15 and hire 2 engineers: X sourced (X%), hired X (X%)
    • Dev Backend: Create and apply informative template for team handbook pages across department (70%)
    • Dev Backend: Create well-defined hiring process for ICs and managers documented in handbook, ATS, and GitLab projects (70%)
    • Dev Backend: Source 20 candidates by July 15 and hire 1 Gitaly manager: 20 sourced (100%), hired 1 (50%)
      • Plan: Source 25 candidates by July 15 and hire 1 developer: 50 sourced (100%), hired 0 (0%)
      • Distribution: Source 30 candidates by Aug 15 and hire 1 packaging developer and 1 distribution developer: 30 sourced (100%), hired 0 (0%)
      • Manage: Source 35 candidates by Aug 15 and hire 2: 35 sourced (100%), hired 0 (0%)
      • Create: Source 25 candidates by July 15 and hire 1: 25 sourced (100%), hired 0 (0%)
    • Infrastructure: Source 20 candidates by July 15 and hire 1 SRE manager: X sourced (X%), hired X (X%)
      • Database: Source 50 candidates by July 15 and hire 2 DBEs: 7 sourced (14%), hired 1 (50%)
      • Production: Source 75 candidates by July 15 and hire 3 SREs: 24 sourced (32%), hired 2 (66%)
    • Ops Backend: Source 30 candidates by July 15 and hire monitoring and release managers: 30 sourced (100%), hired 1 (50%) => Hired an engineering manager for the Monitor team.
      • CI/CD: Source 60 candidates by July 15 and hire 2 developers: 15 sourced (25%), hired 0 (0%)
      • Configuration: Source 90 candidates by Aug 15 and hire 3 developers: 90 sourced (100%), hired 0 (0%)
      • Monitoring: Source 90 candidates by Aug 15 and hire 3 developers: 137* sourced (100%*), hired 2 (67%) (We don’t have reliable numbers for “sourced” because it was a pooled approach)
      • Secure: Source 75 candidates by Aug 15 and hire 3 developers: ~50 sourced (66%), hired 0 (0%)
      • Serverless: Source 20 candidates by Aug 15 and hire 1 developer: 0 sourced (0%), hired 0 (0%)
    • Quality: Source 150 candidates by Aug 15 and hire 3 test automation engineers: X sourced (60%), hired 2 (78%)
    • Security: Source 150 candidates by Aug 15 and hire 5 security team members: 105 sourced (70%), hired 5 (100%)
    • Support: Source 50 candidates by July 15 and hire an APAC manager: 50 sourced (100%), hired 0 (0%) - pursued top candidate for over a month who ended up signing our offer days after Q3 ended!
      • Self-managed: Source 350 candidates by July 30 and hire 7 support engineers: 110 sourced (32%), hired 7 (100%)
      • Services: Source 200 candidates by July 30 and hire 4 agents: 110 sourced (55%), hired 4 (100%)
    • UX: Source 25 candidates by Aug 15 and hire 3 UX designers: 25 sourced (100%), hired 2 (66%)
  • CFO: Improve payroll and payments to team members
    • Controller: Analysis and proposal on TriNet conversion, including a cost/benefit analysis of making the move (support from People Ops)
    • Controller: Transition the expense reporting approval process to the payroll and payments lead.
    • Controller: Full documentation of the payroll process (SOX compliant)
  • CFO: Improve financial performance
    • Fin Ops: Marketing-pipeline-driven revenue model (needs assistance from the marketing team)
    • Fin Ops: Recruiting/Hiring Model: redesign the GitLab model so that at least 80% of our hiring can be driven by revenue.
    • Fin Ops: Customer support SLA metric on dashboard
  • CFO: Build a scalable team
    • Legal: Contract management system for non-sales related contracts.
    • Legal: Implement a vendor management process
    • Data and Analytics: Real-time dashboard for company-critical data. Product Event and User Data Dashboards implemented (signed off by Product Management), Customer Success Dashboard implemented (signed off by Dir. of Customer Success), Marketing Dashboard implemented (signed off by marketing team).
    • Data and Analytics: Increase adoption and usage of the data warehouse and dashboards. Self-serve process for generating new events and dashboards documented, data tests in place for all current and new data pipelines (a sketch of such a test appears after this list).
    • Data and Analytics: Improve security of corporate data. Every ELT pipeline validates permissions.
  • CCO: Efficient and Effective Hiring.
    • New and improved ATS chosen and implemented for more efficient and effective hiring (key results are improved metrics, decreased time in process, and minimized manual effort for resume review and scheduling)
    • More onboarding guidance, with sessions held for new hires on Monday and Tuesday and recorded for asynchronous value.
    • First iteration of ELO score for interviewers.
  • CCO: Build a strong and scalable foundation within People Ops
    • Select Benefits, 401(k), Stock options administration, and Payroll providers to bring Payroll and benefits in-house
    • Improve scalability and effectiveness of the 360 process and employee engagement survey process.
  • CCO: Summit Success
    • Complete the 2018 Summit successfully, measured through a survey of attendees and non-attendees
    • Determine location, changes and improvements for the 2019 Summit
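
A minimal sketch of what a data test on a pipeline could look like, in support of the data-tests key result above. The table, columns, and connection string are invented placeholders, not GitLab's actual warehouse schema; a real implementation would live in the pipeline's own tooling.

```python
# Hypothetical data tests: fail the pipeline run if basic expectations about a
# freshly loaded table are not met. All names below are illustrative placeholders.
import psycopg2

# Each check is (description, SQL returning a single integer that must be 0).
CHECKS = [
    ("rows with a NULL customer_id",
     "SELECT COUNT(*) FROM analytics.orders WHERE customer_id IS NULL"),
    ("duplicate order_id values",
     "SELECT COUNT(*) - COUNT(DISTINCT order_id) FROM analytics.orders"),
    ("stale load (nothing loaded in the last 24 hours)",
     "SELECT CASE WHEN MAX(loaded_at) < NOW() - INTERVAL '24 hours' THEN 1 ELSE 0 END FROM analytics.orders"),
]

def run_checks(dsn: str) -> None:
    """Run every check and raise if any of them reports a non-zero result."""
    failures = []
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            for description, sql in CHECKS:
                cur.execute(sql)
                (count,) = cur.fetchone()
                if count:
                    failures.append(f"{description}: {count}")
    if failures:
        raise RuntimeError("data tests failed: " + "; ".join(failures))

if __name__ == "__main__":
    # The DSN is a placeholder; a real pipeline would read it from configuration.
    run_checks("dbname=warehouse host=localhost user=pipeline")
```

Raising on failure is what lets a data test act as a gate on the pipeline rather than a report that is read after the fact.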

Retrospective

VPE

  • Good
    • We moved to GCP successfully
    • Hiring the infra team went great
    • Handbook was substantially improved
    • All departments (except development) are on hiring pace
    • Sourcing was done on time
    • Error budgets incentivized GitLab.com availability without slowing the team down
  • Bad
    • We moved to GCP later than anticipated
    • RepManager felt like a roll of the dice during the migration. Should have been rock solid
    • Development teams are behind their needed hiring pace (Backend Rails roles)
    • Can’t yet measure availability simply (one metric)
    • Pages outage during summit was self-inflicted
  • Try
    • Work with PeopleOps/Finance to do something about Rails hiring
    • Rip out RepManager
    • Stick to new Infra roadmap (solves multiple high-priority issues)
    • Continue to steer culture of teams through deliberate promotions/hiring

Frontend

  • Good
  • Bad
  • Try

Dev Backend

  • Good
    • Hired manager for Manage
    • Shipped some team pages and encouraged a lot of thought around how best to “market” teams both internally and externally
    • Hiring process for developers is very thoroughly documented
    • New consistent technical interview process is live and demonstrating great results
    • Team is highly engaged in improving hiring
  • Bad
    • Still only on track to hit 35% of hiring goals by EOY
    • Documentation KRs got largely finished, then moved to the back burner after the Summit
    • Still a lack of clarity in the team about the new product categories and which teams are responsible for what aspects of the work, especially technical debt
  • Try
    • Be willing to adjust KRs mid-quarter if priorities shift dramatically (like they did in the wake of our hiring analysis)
    • Set aggressive lag indicator targets for hiring to spur a sea change
    • Communicate more directly and frequently with hiring managers until our pace is more comfortable

Plan

  • Good
    • Working with PM, FE, and UX, issues are much smaller and easier to schedule
    • We got a ‘free’ team member from an internal transfer, who was ready to start working at a high level immediately.
  • Bad
    • Still no hires: last hire was in May
    • Lost six points from error budget very early in the quarter; fixed processes to reduce the human error in that incident
    • Missed two performance improvements (one in review but not merged, one pushed to Q4)
    • Batch commenting was only merged just before the 11.4 freeze, despite work having started in January
  • Try

Distribution

  • Good
    • Released Charts in GA on time, received positive feedback
    • Prioritised important GitLab.com and customer issues in time
    • Managed to preserve the error budget
    • Sourcing was done on time
    • Encouraged discussion with the whole engineering team over feature flags
  • Bad
    • Team morale dropped after releasing Charts as GA, exhaustion set in after months of pushing towards the big milestones
    • Other work suffered with the focus on Charts
    • Technical debt is on the rise across projects
    • Hiring pipeline is of low quality
  • Try
    • Identify the most critical technical debt items and address them
    • Create a better balance between the team-owned projects
    • Restart the team training effort

Geo

  • Good
    • Completed the GCP migration
      • 5m+ projects and millions of attachments/uploads migrated!
      • Coordination between teams to execute the migration
    • Teamwork to categorize Geo’s backlog to develop the roadmap
    • Good blog post on how Geo was built
    • Helpful and timely responses from FE and QA teams
  • Bad
    • Significant miss during the migration: HEAD was not replicated
    • Many manual steps in the Geo runbooks
    • QA test failures - work needs to be done to make them more effective
  • Try
    • Get back into a rhythm now that GCP migration is concluded
    • Develop a more detailed roadmap and focus milestones on that roadmap
    • Improve Geo’s part of the QA test suite
    • Contribute further technical blog posts

Create/Manage (formerly Platform)

  • Good
    • Hit sourcing goal
  • Bad
    • No one new was hired
    • The Platform team being split into Create and Manage, and Manage getting its own Engineering Manager, took focus away from planned performance improvements.
  • Try
    • Active sourcing

Gitaly

  • Good
    • 1.0 shipped, NFS turned off for .com
    • Gitaly very stable after 1.0 launch
    • 1.1 almost complete, code duplication between gitlab-ce and gitaly-ruby is removed
    • Plans in place for object deduplication and HA Gitaly
  • Bad
    • Project discipline has atrophied during the push to 1.0
    • Still many urgent projects that need to be done “next” with a small team
    • Still being interim managed by Director of Dev Backend
  • Try
    • Source even more aggressively for manager and developer hires
    • Shift team rhythms back to being in sync with GitLab releases, focus on establishing normal project discipline rhythms and processes

Gitter

Infrastructure

  • Good
    • GCP Migration done in August
    • Adding more people to the team - good onboarding in Americas
  • Bad
    • DR implications and analysis with Hurricane Florence
    • SRE team in Europe still on a 2-person rotation
    • Small self-inflicted incidents happened with GitLab Pages
  • Try
    • Focus on DR strategy for next quarter's OKRs
    • Focus on prevention of incidents with iterations on change control processes
    • Focus hiring in EMEA

Database

  • Good
    • Hired 2nd SRE manager with DBRE background
    • No self-inflicted DB incidents
  • Bad
    • DBRE hiring moving slower than desired
  • Try
    • More focus on meeting sourcing numbers for DBRE role.

Ops Backend

  • Good
    • Lots of conversations with the team around throughput, and the feedback has been positive. We are seeing comments in retrospectives about having large MRs and about the value of breaking deliverables into smaller MRs that can be done quickly and reviewed within hours instead of days.
    • Hired an Engineering Manager for the Monitor team. Seth joined the team this quarter.
    • The Configure team regained Mayra full-time, Thong joined the team, and Dylan has done a great job in his new engineering manager role.
    • Elliot joined as the Engineering Manager for the Verify and Release teams.
    • Split Verify and Release team members to provide more focus.
    • Ops team pages were updated with more details: https://about.gitlab.com/handbook/engineering/ops/
    • Added a director readme: https://about.gitlab.com/handbook/engineering/ops/director/
  • Bad
    • We are still behind on our hiring. We were not able to hire the second Engineering Manager for Release.
    • We were not able to start on the implementation of the throughput dashboard that would let us capture data for the team.
    • The Throughput OKR was not defined in a measurable way, which made it hard to quantify and score.
  • Try
    • Focus on implementation of Throughput and other engineering metrics (a rough sketch of the throughput metric appears after this list)
    • Try new sourcing methods and work on compensation issues
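
A rough sketch of how the throughput number referenced above could be pulled from the GitLab API, assuming throughput is counted as merge requests merged per week. The project ID, token, and time window are placeholders, and pagination and error handling are omitted; this illustrates the metric, not the dashboard that was planned.

```python
# Count merge requests merged per ISO week for one project via the GitLab API.
# PROJECT_ID and TOKEN are placeholders; a real dashboard would also paginate.
from collections import Counter
from datetime import datetime

import requests

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT_ID = 12345        # placeholder project ID
TOKEN = "REPLACE_ME"      # a personal access token with API read access

def merged_mrs_per_week(project_id: int, since: str) -> Counter:
    """Return a Counter mapping ISO year-week strings to merged MR counts."""
    resp = requests.get(
        f"{GITLAB_API}/projects/{project_id}/merge_requests",
        headers={"PRIVATE-TOKEN": TOKEN},
        params={"state": "merged", "updated_after": since, "per_page": 100},
    )
    resp.raise_for_status()
    weeks = Counter()
    for mr in resp.json():
        merged_at = mr.get("merged_at")
        if merged_at:
            year, week, _ = datetime.strptime(merged_at[:10], "%Y-%m-%d").isocalendar()
            weeks[f"{year}-W{week:02d}"] += 1
    return weeks

if __name__ == "__main__":
    for week, count in sorted(merged_mrs_per_week(PROJECT_ID, "2018-07-01T00:00:00Z").items()):
        print(week, count)
```

Cycle time would need additional data per MR (for example, time from first commit to merge), which is part of why the dashboard is a larger piece of work.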

Monitoring

  • Good
    • Hired 2 developers. While this misses our goal of hiring 3, we did pretty well.
    • Weren’t able to accurately assess how many candidates were sourced for our positions because of the pooled back-end approach.
    • Error budget was preserved at 100%. This may get harder to pull off as we release more features, increase our surface area, and define SLOs for the services we offer (a worked example of the error-budget arithmetic appears after this list).
    • Onboarded a new manager and a new developer during this time.
  • Bad
    • Completely missed the deliverable for the admin dashboard.
  • Try
    • Determine what SLOs we will be responsible for and implement them.
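
A worked illustration of the error-budget arithmetic behind the SLO point above (the 99.95% figure is a hypothetical target, not the team's actual SLO):

```python
# How much downtime a monthly availability SLO leaves as an error budget.
slo = 0.9995                       # hypothetical 99.95% availability target
minutes_in_month = 30 * 24 * 60    # 43,200 minutes in a 30-day month

error_budget_minutes = minutes_in_month * (1 - slo)
print(f"{error_budget_minutes:.1f} minutes of allowed downtime")  # ~21.6 minutes
```

Preserving the budget at 100% means none of that allowance was spent; every additional service with its own SLO adds another budget to track.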

CI/CD

  • Good
    • Communication around planning improved a lot and involved more of the team
    • Overall adoption of Throughput and smaller MRs
    • Started the process of splitting the team into the Verify and Release teams
    • Steve joined as a Backend Developer
    • Matija joined our team (transitioned from another internal team)
  • Bad
    • Behind on hiring goals
    • Development of features frequently gets slowed down by uncovering tech debt or dependencies
    • Runner and pipeline features are hard for the Support team to support, leading to an increase in the number of support requests for the team
    • Some big MRs were painful to get merged before the feature freeze
  • Try
    • Look for more ways to break MRs into smaller pieces
    • Async retros to allow greater participation
    • Look for new ways to help us plan our releases to make things more predictable and less distracting

Secure

  • Good
    • CodeQuality was successfully handed over to Verify.
    • We identified friction points in our process, especially with the versioning of our tools. These will be fixed in Q4.
    • Backend and GitLab skills are improving.
    • We have more and more feedback from customers to help us prioritize features and bugs.
    • Communication and planning improved a lot by involving BE, FE, and UX earlier in the process.
  • Bad
    • Technical prerequisites for storing data in the DB slipped several times. It’s really hard to define an MVP.
    • Could not hire anyone during this quarter. The candidate pool is very small, and sourcing is tedious.
    • Storing vulnerabilities data in the DB was more complex than expected. It’s also coupled with features from other teams.
    • We have too many domains (five) to cover. Even with CodeQuality gone, we still switch context all the time.
  • Try
    • We have a new recruiter for the Secure team and new requirements for the candidates. We will target pure Ruby on Rails developers for the open positions.
    • We should have a maintainer on the team.
    • Put in place a rotation for our Release Manager duties. We need more people on the team for that.
    • Reduce our technical debt as we add more members to the team.

Configure

  • Good
    • Managed to have 2 retros across a very wide timezone range [UTC-7, UTC+13]
    • Hired a new backend developer (Thong) and a new frontend engineer (Jacques)
    • Rotating daily standup led to people getting to know each other better
    • Got useful feedback about Auto DevOps by working with the support team
    • A lot more collaboration between cross functional teams from discovery through to implementation
    • Finally managed to ship Auto DevOps enabled by default for all customers
    • Our product vision for 2019 looks really exciting and cutting edge
    • Our asynchronous communication has improved in order to face the challenge of our timezone range
    • Made some progress towards RBAC support which is a feature that has been requested for a long time
    • Shipped protected environments which is a first major improvement to support operators
  • Bad
    • Our features still have a very difficult setup process for local development
    • We are not prioritizing our user research findings from Auto DevOps users
    • It seems hard to see the impact of our work due to unknown (but seemingly small) user base
    • Issues still seem to be too large and often take a whole month or slightly more to finish
    • Critical features for Auto DevOps have still not been implemented
    • Not enough hires for backend roles leading to not enough capacity to reach our ambitious goals
    • Missed deliverables occur most months
    • QA test coverage for our products has not improved in some time
    • Auto DevOps on by default has received a great deal of negative feedback from customers
    • Still have not managed to enable Auto DevOps by default on gitlab.com due to our CI infrastructure not being scalable enough
    • Still have not managed to ship RBAC support which is critical for adoption of our Kubernetes integration by most organisations
    • Communication via multiple channels (slack, MR, issue) for the same topic is difficult to follow
  • Try
    • Async retros so that we do not need somebody to stay up very late to attend retro
    • Work more closely with Quality team to improve our test coverage
    • Get closer to our users by regularly working with support and reading zendesk issues from customers
    • Work closely with sourcing to get more candidates with very specific skills and interest in Ops and Kubernetes, and to better sell our team
    • Use threaded discussions on GitLab issues to discuss at a higher bandwidth
    • Try out Geekbot so we stay in touch even though our rotating standups are only twice a week
    • Beer call once a quarter to get to know each other even better

Quality

  • Good
    • Focus and speciality within the group: dedicated test automation counterparts for the Dev stage and Developer Productivity functions have booted up.
    • Set up weekly sub-team meetings for collaboration.
    • Implemented triage-package to scale out triage issues to all engineering teams.
  • Bad
    • No assigned resources for the Ops stage.
    • It’s a challenge to navigate and prioritize all the work we have since we work in multiple projects.
    • Did not reach hiring goal.
    • As we are adding more tests, suites are taking longer to run.
    • Accidentally committed to 4 OKRs, rather than 3 :)
  • Try
    • Set up project management processes and tooling in a common way for all Quality projects.
    • Initiate better long-term planning (roadmap) before epic/issue creation.
    • Come up with a simple MVC for test parallelization

Security

  • Good
    • All Q3 Goals were met.
    • We hired 5 Security Engineers in Q3.
    • For S1 security vulnerabilities, our MTTR averages under 30 days, below security industry standards.
    • We have delivered many FGUs, which helps to drive overall awareness of security initiatives at GitLab.
  • Bad
    • Our MTTR for S2 security vulnerabilities is the next thing that needs improvement.
    • Both critical and regular security release processes could use more automation.
    • Timezone coverage for Security team is still mostly concentrated in US and EU zones.
    • Q3 goals did not fully reflect the breadth of security domain deliverables.
  • Try
    • Hire security engineers in APAC timezone to increase coverage for security incident response.
    • Increase Q4 goals to cover more breadth of Security team initiatives.
    • Use FGUs as a forum to drive accountability throughout GitLab, to improve MTTRs for security vulnerabilities.

Support

  • Good
    • Preparation in advance of Summit to ensure positive customer experience while maximizing support team engagement.
    • Positive progress on exposing key Support metrics into Corporate metrics.
    • Global collaboration on staying on top of our hiring plan.
  • Bad
    • Length of time to source and screen excellent Support Engineering Manager candidates in APAC.
    • Experienced losing our first team member to voluntary termination.
    • SLAs for self-managed customers continue to be below goal
    • Some high-profile customers/prospects had a disruptive experience during the Summit
  • Try
    • Establishing a more streamlined candidate to hire process.
    • Clarify expectations for each level of Support Engineering and Support Agent job roles to complement career growth.
    • Ramp up sourcing/hiring in APAC
    • Partner with sales to take especially great care of important customers/prospects prior to renewals and new contracts

Support - Self-Managed

  • Good
    • New Hires continue to make an impact quickly
    • Senior Engineers are focusing on deep performance issues and surfacing problems (Gitaly/NFS).
  • Bad
    • Small Premium customers are starting to generate too many tickets.
    • We haven’t leveraged ticket priority as much as we should.
    • Our bootcamps have atrophied
    • Knowledge is getting siloed
  • Try
    • Work with Customer Success to improve onboarding for ALL premium customers
    • Shore up our ticket priority workflows
    • Build a process to verify/enhance bootcamps
    • Encourage the team (seniors specifically) to share more in a group setting

Support - Services

  • Good
    • Hit our stride in hiring: additional headcount matches volume well.
    • Worked cross-team with Security, Accounts and SMB Team on improving process
  • Bad
    • Ticket volume for .com customers is at a level where a miss severely affects SLA performance
    • GitHost app stopped upgrading customer instances after a bad version was posted on version.gitlab.com
  • Try
    • Revisiting breach notifications to ensure we aren’t missing tickets because of visibility
    • Encouraging agents to do ad hoc pairing sessions for learning and reducing the bystander effect

UX

  • Good
    • We hired two excellent UX Designers: Amelia, and one soon to be announced!
    • Our hiring pipeline is strong with highly qualified candidates.
    • Despite an increasing number of deliverables per milestone and unexpected UX needs for the 2019 vision, we were able to achieve 100% and 90% respectively on our two OKRs.
    • Our department’s camaraderie and ability to remain aligned and connected has not been impacted by the company’s and department’s rapid growth.
    • Embedding designers in cross-functional teams (stable counterparts) has enhanced collaboration and allowed the UX department to dig deeper into existing features.
    • We added a UX Vision to our department handbook, setting the tone and direction for all of our efforts.
  • Bad
    • Design pattern library and design system issues take a long time to review and merge.
    • Our old UX guide is still live, as not all of its material has been moved to design.gitlab.
    • Design discussions still feel fragmented across multiple channels (issues, MR, slack, sync calls).
  • Try
    • Async retros for the UX department (separate from group retros) to surface shared problems and solutions.
    • Aggressively break down and iterate on design pattern library and design system issues.
    • Archive the old UX guide and make design.gitlab the SSOT for UX standards and guidelines.
    • Investigate ways to make design discussion a first-class citizen in GitLab.

UX Research

  • Good
    • We created 100% of personas that were requested by UX Designers or the Product team.
    • We conducted 62 user interviews, which led to the creation of 6 new personas.
    • Emily von Hoffmann and Andy Volpe supported UX Research considerably by conducting and analysing user interviews. We couldn’t have achieved this OKR without their help.
    • Product Marketing were very supportive of our efforts. They actively participated in Key Reviews and helped us shape the personas’ format and content.
    • Despite the disappointingly low response rate to the survey, the data we collected provided insight into who qualifies as a churned GitLab.com user and how users first interact with GitLab. We also managed to triangulate the data with user interviews and provide Product with a provisional list of pain points for further exploration.
  • Bad
    • In order to identify 5 pain points for users who have left GitLab.com, we created a survey to send to churned users. We distributed the survey to 8000+ churned GitLab.com users whose details were supplied to us by Product. Unfortunately, the survey only received 126 partial responses. Of those responses, 33% of users confirmed that they were in fact still using GitLab. The recipient list was inadequate for the purposes of our research. This isn’t something Product could have foreseen.
  • Try
    • Closer collaboration with the Product team when creating OKRs.