CEO: Grow Incremental ACV according to plan. IACV at 120% of plan, pipeline for Q4 at 3x IACV minus in-quarter orders, LTV/CAC per campaign and type of customer.
Support: Achieved 94.5% CSAT across all customer tickets for Q3.
Self-managed: 95% on CSAT, 95% on Premium SLAs: Achieved 95% CSAT and 83% against Premium SLAs.
Services: 95% on CSAT, 95% on Gold SLAs: Achieved 93% CSAT and 94% against SLAs.
CMO: Achieve 3x IACV in Q4 pipeline minus in-quarter orders.
Content: Ensure the content team selects their own work and focuses on results. No distractions.
Content: Increase sessions on content team blog posts. 5% month over month.
Content: Measure and increase pipe-to-spend for content team activities. 10% MoM.
Field Marketing: Improve account based marketing for large & strategic accounts. Pull in 1 large or strategic deal into Q3, increase opportunity size of 2 large or strategic deals, 5 intros to new departments at large or strategic accounts.
Field Marketing: Increase operational maturity in field marketing tracking and execution. All event lead uploads within 48 hours, all contacts accurate statuses and notes, accurate campaign data for all field marketing campaigns tracked in salesforce.com.
Field Marketing: Achieve 100% of field marketing portion of SCLAU plan.
Online Growth: Expand online growth tactics. Increase overall traffic to existing pages by 10% compared to last quarter, increase sign-ups from SEO/PPC/Paid Social by 20% compared to last quarter.
Online Growth: Improve about.gitlab.com conversion rate optimization. Increase live chat leads 20%, increase contact requests by 20%, increase trial sign-ups by 20%.
Marketing Program Management: Launch new email nurture series educating trial requesters on .com Gold plan. Keep unsubscribe rate below 1%, 30% of nurture audience at "engaged" progression status or higher.
Marketing Program Management: Increase awareness and ongoing education on the GitLab <> Google partnership. Increase GKE trials referred by GitLab per week by 3x.
Marketing Program Management: Improve reporting for marketing programs. Automate and schedule reporting for email & webcast performance, ensure all email and webcast programs are tracked completely in salesforce.com campaigns.
Marketing Operations: Streamline lead management processes. Refreshed lead & contact layouts with all unnecessary fields removed from view, trim down to a single lead queue view for SCA team.
Marketing Operations: Deploy multi-touch attribution model in Bizible.
Marketing Operations: Ensure all online marketing is tracked as campaigns in salesforce.com. Complete tracking of all required campaign fields for all online marketing.
SMB Customer Advocacy: Achieve 200% of SMB IACV plan.
SMB Customer Advocacy: Improve SMB account management. Increase SMB gross retention to 90%, increase net retention to 150%, reduce number of manual license key replacements by 30%.
CMO: Committer Program
Increase the number of contributions (not contributors) from the wider community. For the 11.3 release the target is 120 MRs, and for the 11.4 release the target is 150 MRs.
Hired full-time GitLab contributor(s) at customers. One hire made at a customer and a blog post about their initiatives published.
CMO: Increase brand awareness and preference for GitLab.
Corporate marketing: Help our customers evangelize GitLab. 3 customer or user talk submissions accepted for DevOps related events.
Corporate marketing: Drive brand consistency across all events. Company-level messaging and positioning incorporated into pre-event training, company-level messaging and positioning reflected in all event collateral and signage.
Corporate marketing: Increase share of voice. 20% lift in web and twitter traffic during event activity.
Product marketing: Increase inbound leads - Create 20 new web pages to educate market and prospects about GitLab capabilities and solutions
Product marketing: Deliver 18 enablement sessions to sales and channel teams to increase SCLAU by a count of 15
Product marketing: Launch Customer Advisory Board with 15 key strategic GitLab account customers to create 15 customer champions (references) to help close SCLAU opportunities
Alliances: Get listed and offered as partner of choice on large cloud providers. 3 public clouds.
Alliances: Develop an opportunity for migration with a large OSS community. Sign letter of intent.
Alliances: Secure keynotes at big cloud conferences for brand building and sales momentum. 2 keynotes.
Hire two directors
DevOps score vs. cycle time: first two iterations shipped. Prove the relationship between DevOps score and releases per year
Hire a growth team
Make 10 key feature docs more enticing, replacing the need for their about.gitlab.com feature pages: 5/10 (50%)
CEO: Great team. ELO score per interviewer, Real-time dashboard for all Key Results.
VPE: 10 iterations to engineering function handbook: 7.5/10 (75%)
Infrastructure: 10 iterations to infrastructure department handbook: X/10 (X%)
Production: 10 iterations to production team handbook: X/10 (X%)
Ops Backend: 10 iterations to ops backend department handbook: 4/10 (40%)
Ops Backend: Define and launch a dashboard with 2 engineering metrics for backend teams => 0%. Throughput and cycle time are the 2 metrics we would like to track; implementation of the dashboard was not started in Q3.
Quality: 10 iterations to quality department handbook: 8.5/11 (77%)
Security: 10 iterations to security department handbook: X/10 (X%)
Support: 20 iterations to support department handbook: 21/20 (105%)
Self-managed: 10 iterations to self-managed team handbook focused on process efficiency: 10/10 (100%)
Services: 10 iterations to services team handbook focused on process efficiency: 11/10 (110%)
Support: Implement dashboard for key support metrics (SLA, CSAT)
Frontend: Source 100 candidates by Aug 15 and hire 2 manager and 3 engineers: X sourced (X%), hired X (X%)
Frontend Discussion: Source 25 candidates by July 15 and hire 1 engineer: X sourced (X%), hired X (X%)
Frontend MD&P: Source 50 candidates by Aug 15 and hire 2 engineers: X sourced (X%), hired X (X%)
Dev Backend: Create and apply informative template for team handbook pages across department (70%)
Dev Backend: Create well-defined hiring process for ICs and managers documented in handbook, ATS, and GitLab projects (70%)
Dev Backend: Source 20 candidates by July 15 and hire 1 Gitaly manager: 20 sourced (100%), hired 1 (50%)
Plan: Source 25 candidates by July 15 and hire 1 developer: 50 sourced (100%), hired 0 (0%)
Distribution: Source 30 candidates by Aug 15 and hire 1 packaging developer and 1 distribution developer: 30 sourced (100%), hired 0 (0%)
Manage: Source 35 candidates by Aug 15 and hire 2: 35 sourced (100%), hired 0 (0%)
Create: Source 25 candidates by July 15 and hire 1: 25 sourced (100%), hired 0 (0%)
Infrastructure: Source 20 candidates by July 15 and hire 1 SRE manager: X sourced (X%), hired X (X%)
Database: Source 50 candidates by July 15 and hire 2 DBEs: X sourced (X%), hired X (X%)
Production: Source 75 candidates by July 15 and hire 3 SREs: X sourced (X%), hired X (X%)
Ops Backend: Source 30 candidates by July 15 and hire monitoring and release managers: 30 sourced (100%), hired 1 (50%) => Hired an engineering manager for the Monitor team.
CI/CD: Source 60 candidates by July 15 and hire 2 developers: X sourced (X%), hired X (X%)
Configuration: Source 90 candidates by Aug 15 and hire 3 developers: 90 sourced (100%), hired 0 (0%)
Monitoring: Source 90 candidates by Aug 15 and hire 3 developers: 137* sourced (100%*), hired 2 (67%) (We don't have reliable numbers for "sourced" because it was a pooled approach)
Secure: Source 75 candidates by Aug 15 and hire 3 developers: ~50 sourced (66%), hired 0 (0%)
Serverless: Source 20 candidates by Aug 15 and hire 1 developer: 0 sourced (0%), hired 0 (0%)
Quality: Source 150 candidates by Aug 15 and hire 3 test automation engineers: X sourced (60%), hired 2 (78%)
Security: Source 150 candidates by Aug 15 and hire 5 security team members: 105 sourced (70%), hired 5 (100%)
Support: Source 50 candidates by July 15 and hire an APAC manager: 50 sourced (100%), hired 0 (0%) - pursued top candidate for over a month who ended up signing our offer days after Q3 ended!
Self-hosted: Source 350 candidates by July 30 and hire 7 support engineers: 110 sourced (32%), hired 7 (100%)
Services: Source 200 candidates by July 30 and hire 4 agents: 110 sourced (55%), hired 4 (100%)
UX: Source 25 candidates by Aug 15 and hire 3 ux designers: 25 sourced (100%), hired 2 (66%)
CFO: Improve payroll and payments to team members
Controller: Analysis and proposal on TriNet conversion, including a cost/benefit analysis of making the move (support from People Ops)
Controller: Transition expense reporting approval process to payroll and payments lead.
Controller: Full documentation of the payroll process (SOX-compliant)
CFO: Improve financial performance
Fin Ops: Marketing pipeline driven revenue model (need assistance from marketing team)
Fin Ops: Recruiting / Hiring Model: redesign the GitLab model so that at least 80% of our hiring can be driven by revenue.
Fin Ops: Customer support SLA metric on dashboard
CFO: Build a scalable team
Legal: Contract management system for non-sales related contracts.
Legal: Implement a vendor management process
Data and Analytics: Real Time Dashboard for Company critical data. Product Event and User Data Dashboards implemented (signed off by Product Management), Customer Success Dashboard implemented (signed off by Dir. of Customer Success), Marketing Dashboard implemented (signed off by marketing team).
Data and Analytics: Increase adoption and usage of the data warehouse and dashboards. Self serve process for generating new events and dashboards documented, Data tests in place for all current and new data pipelines.
Data and Analytics: Improve security of corporate data. Every ELT pipeline validates permissions.
CCO: Efficient and Effective Hiring.
A new and improved ATS chosen and implemented for more efficient and effective hiring (key results are improved metrics, decreased time in process, and minimized manual effort for resume review and scheduling).
More onboarding guidance, with sessions held for new hires on Monday and Tuesday and recorded for asynchronous value.
First iteration of ELO score for interviewers.
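A first iteration of an interviewer Elo score could follow the standard Elo update, treating two interviewers who assessed the same candidate as "playing" each other and scoring whoever's recommendation matched the eventual outcome. This is only a sketch under those assumptions; the pairing rules, K-factor, and outcome definition are not specified anywhere in this document:

```python
# Hypothetical sketch of an Elo update for interviewer scoring.
# Pairing rules, K-factor, and "outcome" are illustrative assumptions.

def expected(rating_a: float, rating_b: float) -> float:
    """Expected score for A against B under the standard Elo formula."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, score_a: float, k: float = 32.0):
    """Return updated ratings; score_a is 1.0 (A right), 0.0 (B right), or 0.5 (tie)."""
    new_a = rating_a + k * (score_a - expected(rating_a, rating_b))
    new_b = rating_b + k * ((1.0 - score_a) - expected(rating_b, rating_a))
    return new_a, new_b
```

Because each update is zero-sum, the average rating across interviewers stays stable by construction, which keeps scores comparable over time.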
CCO: Build a strong and scalable foundation within People Ops
Select benefits, 401(k), stock options administration, and payroll providers to bring payroll and benefits in-house
Improve scalability and effectiveness of the 360 process and employee engagement survey process.
CCO: Summit Success
Complete the 2018 Summit successfully, as measured by a survey of attendees and non-attendees
Determine location, changes and improvements for the 2019 Summit
We moved to GCP successfully
Hiring the infra team went great
Handbook was substantially improved
All departments (except development) are on hiring pace
Sourcing was done on time
Error budgets incentivized GitLab.com availability without slowing the team down
We moved to GCP later than anticipated
RepManager felt like a roll of the dice during the migration; it should have been rock solid
Development teams are behind their needed hiring pace (Backend Rails roles)
Look at PagerDuty alerts and clean up anything that didn't actually matter
Lots of conversations with the team around throughput, and the feedback has been positive. We are seeing comments in retrospectives about having large MRs and about the value of breaking deliverables into smaller MRs that can be done quickly and reviewed within hours instead of days.
Hired an Engineering Manager for the Monitor team. Seth joined the team this quarter.
The Configure team regained Mayra full-time, Thong joined the team, and Dylan has done a great job in his new engineering manager role.
Elliot joined as the Engineering Manager for the Verify and Release teams.
Split Verify and Release team members to provide more focus.
Ops team pages were updated with more details: https://about.gitlab.com/handbook/engineering/ops-backend/
Added a director readme: https://about.gitlab.com/handbook/engineering/ops-backend/director/
We are still behind on our hiring. We were not able to hire the second Engineering Manager for Release.
We were not able to start on the implementation of the throughput dashboard so we can capture data for the team.
Throughput OKR was not defined in a measurable way which made it hard to quantify and score.
Focus on implementation of Throughput and other engineering metrics
Try new sourcing methods and work on compensation issues
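One way to make the throughput OKR measurable is to define it directly from MR timestamps: throughput is the count of MRs merged in a window, and cycle time is the median elapsed time from creation to merge. This is only a sketch; the field names (`created_at`, `merged_at`) are assumed, not taken from any actual export schema:

```python
# Sketch of measurable throughput and cycle-time definitions from MR data.
# Field names are illustrative assumptions about the data export.
from datetime import datetime, timedelta

def throughput(mrs, start, end):
    """Throughput: number of MRs merged inside the [start, end) window."""
    return sum(1 for mr in mrs if mr["merged_at"] and start <= mr["merged_at"] < end)

def median_cycle_time(mrs):
    """Cycle time: median duration from MR creation to merge (None if no merges)."""
    durations = sorted(mr["merged_at"] - mr["created_at"]
                       for mr in mrs if mr["merged_at"])
    if not durations:
        return None
    mid = len(durations) // 2
    if len(durations) % 2:
        return durations[mid]
    return (durations[mid - 1] + durations[mid]) / 2
```

Using the median rather than the mean keeps the metric robust against a few long-lived MRs dominating the score.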
Hired 2 developers. While it still misses our goal of hiring 3, we did pretty well.
Weren't able to accurately assess how many candidates were sourced for our positions because of the pooled back-end approach.
Error budget was preserved at 100%. This may get harder to pull off as we release more features, increase our surface area, and define SLOs for the services we offer.
Onboarded a new manager and a new developer during this time.
Completely missed the deliverable for the admin dashboard.
Determine what SLOs we will be responsible for and implement them.
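The error-budget bookkeeping mentioned above reduces to simple arithmetic once an SLO is chosen. The 99.95% availability target in this sketch is an illustrative assumption, not a published GitLab.com SLO:

```python
# Error-budget arithmetic for an availability SLO.
# The 99.95% target used in the test is an illustrative assumption.

def error_budget_minutes(slo: float, window_minutes: float) -> float:
    """Total downtime allowed in the window while still meeting the SLO."""
    return (1.0 - slo) * window_minutes

def budget_remaining(slo: float, window_minutes: float, downtime_minutes: float) -> float:
    """Fraction of the error budget still unspent (1.0 means fully preserved)."""
    budget = error_budget_minutes(slo, window_minutes)
    return max(0.0, 1.0 - downtime_minutes / budget)
```

For example, a 30-day window at 99.95% availability allows roughly 21.6 minutes of downtime; "error budget preserved at 100%" means none of it was spent.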
CodeQuality was successfully handed over to Verify.
We identified friction points in our process, especially with the versioning of our tools. It will be fixed in Q4.
Backend and GitLab skills are improving.
We have more and more feedback from customers, which helps us prioritize features and bugs.
Communication and planning improved a lot by involving BE, FE, and UX earlier in the process.
Technical prerequisites for storing data in the DB slipped several times. It's really hard to define an MVP.
We could not hire anyone during this quarter. The candidate pool is very small, and sourcing is tedious.
Storing vulnerabilities data in the DB was more complex than expected. It's also coupled with other features from other teams.
We have too many domains (five) to cover. Even with CodeQuality gone, we still switch context all the time.
We have a new recruiter for the Secure Team, and new requirements for the candidates. We will target pure Ruby on Rails developers for the open positions.
We should have a maintainer in the team.
Put a rotation in place for our Release Manager role. We need more people on the team for that.
Reduce our technical debt as we add more members to the team.
Managed to have 2 retros across a very wide timezone range [UTC-7, UTC+13]
Hired a new backend developer (Thong) and a new frontend engineer (Jacques)
Rotating daily standup led to people getting to know each other better
Got useful feedback about Auto DevOps by working with the support team
A lot more collaboration between cross functional teams from discovery through to implementation
Finally managed to ship Auto DevOps enabled by default for all customers
Our product vision for 2019 looks really exciting and cutting edge
Our asynchronous communication has improved in order to face the challenge of our timezone range
Made some progress towards RBAC support which is a feature that has been requested for a long time
Shipped protected environments which is a first major improvement to support operators
Our features still have a very difficult setup process for local development
We are not prioritizing our user research findings from Auto DevOps users
It seems hard to see the impact of our work due to unknown (but seemingly small) user base
Issues still seem to be too large and often take a whole month or slightly more to finish
Critical features for Auto DevOps have still not been implemented
Not enough hires for backend roles leading to not enough capacity to reach our ambitious goals
Missed deliverables occur most months
QA test coverage for our products has not improved in some time
Auto DevOps on by default has received a great deal of negative feedback from customers
Still have not managed to enable Auto DevOps by default on gitlab.com due to our CI infrastructure not being scalable enough
Still have not managed to ship RBAC support which is critical for adoption of our K8s integration by most organisations
Communication via multiple channels (Slack, MR, issue) for the same topic is difficult to follow
Async retros so that we do not need somebody to stay up very late to attend retro
Work more closely with Quality team to improve our test coverage
Get closer to our users by regularly working with support and reading Zendesk issues from customers
Work closely with sourcing to get more candidates who have very specific skills and an interest in Ops and Kubernetes, so we can better sell our team
Use threaded discussions on GitLab issues to discuss at a higher bandwidth on GitLab
Try out Geekbot so we stay in touch even though our rotating standups are only twice a week
Beer call once a quarter to get to know each other even better
Focus and specialty within the group: Dev Stage dedicated test automation counterparts and Developer Productivity functions have booted up.
Set up weekly sub-team meetings for collaboration.
Implemented triage-package to scale out triage issues to all engineering teams.
No resources assigned for the Ops stage.
It's a challenge to navigate and prioritize all the work we have, since we work across multiple projects.
Did not reach hiring goal.
As we are adding more tests, suites are taking longer to run.
Accidentally committed to 4 OKRs rather than 3 :)
Set up project management process and tooling in a common way for all Quality projects.
Initiate better long term planning (roadmap) before epic/issue creation.
Come up with a simple MVC for test parallelization
All Q3 Goals were met.
We hired 5 Security Engineers in Q3.
For S1 security vulnerabilities, our MTTR averages under 30 days, below security industry standards.
We have delivered many FGUs, which helps to drive overall awareness of security initiatives at GitLab.
Our MTTR for S2 security vulnerabilities needs improvement next.
Both critical and regular security release processes could use more automation.
Timezone coverage for Security team is still mostly concentrated in US and EU zones.
Q3 goals not fully reflecting the breadth of security domain deliverables.
Hire security engineers in APAC timezone to increase coverage for security incident response.
Increase Q4 goals to cover more breadth of Security team initiatives.
Use FGUs as a forum to drive accountability throughout GitLab, to improve MTTRs for security vulnerabilities.
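The per-severity MTTR figures cited above can be computed directly from vulnerability open/close timestamps. This is a sketch only; the record shape (`severity`, `opened_at`, `closed_at`) is an assumption, not the actual tracking schema:

```python
# Sketch of per-severity MTTR (mean time to remediate) for vulnerabilities.
# Record field names are illustrative assumptions.
from datetime import datetime

def mttr_days(vulns, severity):
    """Average days from report to remediation for closed vulns of one severity."""
    spans = [(v["closed_at"] - v["opened_at"]).days
             for v in vulns if v["severity"] == severity and v["closed_at"]]
    return sum(spans) / len(spans) if spans else None
```

Open vulnerabilities are excluded here, so a long-open S2 backlog would not show up in the average; that is one reason MTTR alone can understate remediation debt.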
Preparation in advance of Summit to ensure positive customer experience while maximizing support team engagement.
Positive progress on exposing key Support metrics into Corporate metrics.
Global collaboration on staying on top of our hiring plan.
The length of time needed to source and screen excellent Support Engineering Manager candidates in APAC.
Experienced our first voluntary termination.
SLAs for self-managed customers continue to be below goal
Some high-profile customers/prospects had a disruptive experience during the Summit
Establishing a more streamlined candidate to hire process.
Clarify expectations for each level of Support Engineering and Support Agent job roles to complement career growth.
Ramp up sourcing/hiring in APAC
Partner with sales to take especially great care of important customers/prospects prior to renewals and new contracts
Support - Self-Managed
New Hires continue to make an impact quickly
Senior Engineers are focusing on deep performance issues and surfacing problems (Gitaly/NFS).
Small Premium customers are starting to generate too many tickets.
We haven't leveraged ticket priority as much as we should.
Our bootcamps have atrophied
Knowledge is getting siloed
Work with Customer Success to improve onboarding for ALL premium customers
Shore up our ticket priority workflows
Build a process to verify/enhance bootcamps
Encourage the team (seniors specifically) to share more in a group setting
Support - Services
Hit stride in hiring: additional headcount matches volume well.
Worked cross-team with Security, Accounts and SMB Team on improving process
Ticket volume for .com customers is at a level where a miss severely affects SLA performance
GitHost app stopped upgrading customer instances after a bad version was posted on version.gitlab.com
Revisiting breach notifications to ensure we aren't missing tickets because of visibility
Encouraging agents to do ad hoc pairing sessions for learning and to reduce the bystander effect
We hired two excellent UX Designers: Amelia, and one soon to be announced!
Our hiring pipeline is strong with highly qualified candidates.
Despite an increasing number of deliverables per milestone and unexpected UX needs for the 2019 vision, we were able to achieve 100% and 90% respectively on our two OKRs.
Our department's camaraderie and ability to remain aligned and connected have not been impacted by the company's and department's rapid growth.
Embedding designers in cross-functional teams (stable counterparts) has enhanced collaboration and allowed the UX department to dig deeper into existing features.
We added a UX Vision to our department handbook, setting the tone and direction for all of our efforts.
Design pattern library and design system issues take a long time to review and merge.
Our old UX guide is still live, as not all of its material has been moved to design.gitlab.
Design discussions still feel fragmented across multiple channels (issues, MRs, Slack, sync calls).
Async retros for the UX department (separate from group retros) to surface shared problems and solutions.
Aggressively break down and iterate on design pattern library and design system issues.
Archive the old UX guide and make design.gitlab the SSOT for UX standards and guidelines.
Investigate ways to make design discussion a first-class citizen in GitLab.
We created 100% of personas that were requested by UX Designers or the Product team.
We conducted 62 user interviews, which led to the creation of 6 new personas.
Emily von Hoffmann and Andy Volpe supported UX Research considerably by conducting and analysing user interviews. We couldn't have achieved this OKR without their help.
Product Marketing were very supportive of our efforts. They actively participated in key meetings and helped us shape the personas' format and content.
Despite the disappointingly low response rate to the survey, the data we collected provided insight into who qualifies as a churned GitLab.com user and how users first interact with GitLab. We also managed to triangulate the data with user interviews and provide Product with a provisional list of pain points for further exploration.
In order to identify 5 pain points for users who have left GitLab.com, we created a survey to send to churned users. We distributed the survey to 8000+ churned GitLab.com users whose details were supplied to us by Product. Unfortunately, the survey only received 126 partial responses. Of those responses, 33% of users confirmed that they were in fact still using GitLab. The recipient list was inadequate for the purposes of our research. This isn't something Product could have foreseen.
Closer collaboration with the Product team when creating OKRs.