
Security Department Performance Indicators

Executive Summary

KPI: Hiring Actual vs Plan
Health: Okay
Reason: Engineering is on plan. But we are lending some of our recruiters to Sales for this quarter, and we just put in place a new "one star minimum" rule that might decrease offer volume.
Next Steps:
  • Health: Monitor health closely
  • Maturity: Get this into Periscope

KPI: MTTM (Mean-Time-To-Mitigation) for S1-S2-S3 security vulnerabilities
Health: Okay
Reason: Currently, our MTTM metrics show that we are effective.
  • Link to issues in gitlab-ce with security and S1 labels
  • Link to issues in gitlab-ce with security and S2 labels
  • Link to issues in gitlab-ce with security and S3 labels
Next Steps:
  • We are able to chart this data effectively at this point, with the Splunk instance.
  • We are effective at maintaining the MTTM for S1/S2/S3 vulnerabilities at 30/60/90 days or less consistently.

KPI: Blocked Abuse Activity
Health: Attention
Reason: Although not automated, the cloud spend cost savings from mitigating abuse activity are promising.
  • Link to URL that outlines that between 01 April 2019 and 09 August 2019, abuse mitigation strategies saved us at least $248,708.48 in cloud spend.
Next Steps:
  • Create more automation/dashboarding around these metrics without having to go through Google Sheets.
  • Is this something that Periscope could help with? If so, we should work with the Data Team to assess whether that's the right tool for this.

KPI: HackerOne budget vs actual (with forecast of 'unknowns')
Health: Problem
Reason: We are over budget.
  • Link to URL that explains the HackerOne spending update to April 2019. Total actuals in 2019 so far: $377,280.
Next Steps:
  • We need to brainstorm with Finance on how we can get creative with predicting the 'unknown'. There must be known financial models specific to forecasting the 'unknown unknowns', but I don't have the Finance background to do this by myself.
  • This needs direct input from Finance for next steps. On Security's end, we've gathered all of the data that we can possibly gather at this point.

Key Performance Indicators

    Hiring Actual vs Plan

    Are we able to hire high quality workers to build our product vision in a timely manner? Hiring information comes from BambooHR where employees are in the division `Engineering`.

    URL(s)

    Health: Okay

    Engineering is on plan. But we are lending some of our recruiters to sales for this quarter. And we just put in place a new "one star minimum" rule that might decrease offer volume.

    Maturity: Level 2 of 3

    We have charts driven off of team.yml

    Next Steps

    MTTM (Mean-Time-To-Mitigation) for S1-S2-S3 security vulnerabilities

    The MTTM metric is an indicator of our efficiency in mitigating security vulnerabilities, whether they are reported through the HackerOne bug bounty program (or other external means, such as security@gitlab.com emails) or internally reported. It is the average number of days to close issues in the GitLab CE project (project_id = '13083') that have the `security` label and an S1, S2, or S3 label; this excludes issues with variations of the security label (e.g. `security review`) and S4 issues. Issues that are not yet closed are excluded from this analysis. This means historical data can change as older issues are closed and are introduced to the analysis. The average age in days threshold is set at the daily level.
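The calculation described above can be sketched as follows. This is an illustration, not the actual Splunk/Periscope query; the issue shape (`labels`, `created_at`, `closed_at`) is a hypothetical structure chosen to match the description:

```ruby
require "date"

# Mean-Time-To-Mitigation sketch: average age in days of *closed* issues
# that carry the exact `security` label plus a given severity label.
# Open issues and label variations like "security review" are excluded.
def mttm_days(issues, severity)
  closed = issues.select do |i|
    i[:closed_at] &&                      # open issues are excluded
      i[:labels].include?("security") &&  # exact label, not e.g. "security review"
      i[:labels].include?(severity)
  end
  return nil if closed.empty?

  ages = closed.map { |i| (i[:closed_at] - i[:created_at]).to_i }
  ages.sum.to_f / ages.size
end

issues = [
  { labels: ["security", "S1"], created_at: Date.new(2019, 1, 1), closed_at: Date.new(2019, 1, 21) },
  { labels: ["security", "S1"], created_at: Date.new(2019, 2, 1), closed_at: Date.new(2019, 3, 13) },
  { labels: ["security review", "S1"], created_at: Date.new(2019, 1, 1), closed_at: Date.new(2019, 6, 1) }, # excluded
  { labels: ["security", "S2"], created_at: Date.new(2019, 1, 1), closed_at: nil },                         # open, excluded
]

puts mttm_days(issues, "S1")  # => 30.0
```

Because open issues are skipped, re-running the calculation later can change historical averages as old issues close, exactly as noted above.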

    URL(s)

    Health: Okay

    Currently, our MTTM metrics show that we are effective.

    Maturity: Level 3 of 3

    We already have a Splunk instance that is ingesting all gitlab-com and gitlab-org issues and can visualize this data in dashboards. Currently, we are working with the Data Team to get this data into Periscope.

    Next Steps

    Blocked Abuse Activity

    This metric is focused around the efficacy of our Abuse Handling and Abuse Operations. The Abuse Team is responsible for tracking, automating, and reporting on blocked abusive accounts on GitLab.com, and tracking reduction in cloud spend where the activities are due to abuse.

    URL(s)

    Health: Attention

    Although not automated, the cloud spend cost savings from mitigating abuse activity are promising.

    Maturity: Level 2 of 3

    Currently, we are relying on Google spreadsheet tracking/pivoting to obtain these metrics. It’s a more manual process than we’d like, and we’d like to gain more automation with these measurements and charting/visualization.

    Next Steps

    HackerOne budget vs actual (with forecast of 'unknowns')

    We currently run a public bug bounty program through HackerOne, and this program has been largely successful - we get a lot of hacker engagement, and since the program went public, we have been able to resolve nearly 100 reported security vulnerabilities. The bounty spend is, however, a budget-forecasting concern because of its month-to-month unpredictability.

    URL(s)

    Health: Problem

    We are over budget.

    Maturity: Level 2 of 3

    We are starting to put together forecast models, but this is not a fully mature process. We've also reached out to HackerOne, and they've provided us with a cost projection chart that we use to 'predict' how the spending trend will change over the lifetime of this program, but that's data obtained from other HackerOne bounty programs, and it doesn't cleanly map to our program.
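As an illustration of the forecasting problem (not our actual model), a naive projection takes the trailing average of recent months and pads it with a contingency buffer for the 'unknown unknowns'. All figures and parameters below are hypothetical:

```ruby
# Naive bounty-spend forecast sketch: next month's spend is projected as
# the trailing average of the last `window` months, inflated by `buffer`
# to leave headroom for unpredictable spikes. Illustrative only.
def forecast_next_month(monthly_spend, window: 3, buffer: 0.25)
  recent = monthly_spend.last(window)
  avg = recent.sum.to_f / recent.size
  (avg * (1 + buffer)).round(2)
end

spend = [40_000, 55_000, 30_000, 65_000, 50_000]  # hypothetical monthly actuals
puts forecast_next_month(spend)  # trailing 3-month average (~48,333) plus 25%
```

A real model would need input from Finance; this only shows why a flat monthly budget line underfits a spiky spend pattern.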

    Next Steps

    Regular Performance Indicators

    Diversity

    Diversity & Inclusion is one of our core values, and a general challenge for the tech industry. GitLab is in a privileged position to positively impact diversity in tech because our remote lifestyle should be more friendly to people who may have left the tech industry, or studied a technical field but never entered industry. This means we can add to the diversity of our industry, and not just play a zero-sum recruiting game with our competitors.

    URL(s)

    Health: Attention

    Engineering is now at the tech benchmark for gender diversity (~16%), but our potential is greater and we can do better. 20% should be our floor in technical roles. Other types of diversity are unknown.

    Maturity: Level 2 of 3

    The content is shared only in a closed metrics review and lacks granularity. It is not visualized or presented as a time series.

    Next Steps

    Handbook Update Frequency

    The handbook is essential to working remote successfully, to keeping up our transparency, and to recruiting successfully. Our processes are constantly evolving and we need a way to make sure the handbook is being updated at a regular cadence.

    URL(s)

    Health: Unknown

    Unknown, but my sense is we are not doing enough. For instance, we have not been able to fully update the handbook after the Development department re-org (the dev backend and ops backend pages are still present, although many of the new teams do have their own pages already).

    Maturity: Level 2 of 3

    We currently just have contribution graphs, which are a poor proxy for this.

    Next Steps

    Team Member Retention

    People are a priority and attrition comes at a great human cost to the individual and team. Additionally, recruiting (backfilling attrition) is a ludicrously expensive process, so we prefer to keep the people we have :)

    URL(s)

    Health: Okay

    I seem to recall our attrition is now below 10%, which is great compared to the tech benchmark of 22% and the remote benchmark of 16%, but the fact that I can't just look at a simple graph makes me nervous...

    Maturity: Level 2 of 3

    There is manually curated data in a spreadsheet from PO.

    Next Steps

    HackerOne outreach and engagement

    The true mark of a successful public bug bounty program is partially tied to hacker outreach and engagement. We've already implemented quite a bit of security automation to engage hackers. For example, when findings are submitted by hackers, an automated dynamic message is generated by our automation scripts, and they receive a response with an expected ETA on a reply (this ETA depends on how many findings are in our 'to be triaged' bucket at that time, so the ETA changes). This ensures the hacker knows 1) we received the report, 2) a human will be reading the report within a certain ETA, and 3) a human will be replying within an ETA. Our program went public in mid-December 2018, and we are now focusing on outreach and incentive programs to retain the top hackers in our program - about 90%+ of the most impactful reports are currently submitted by our Top 5-10 hackers, so it is imperative we retain those individuals.
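The dynamic-ETA idea can be sketched as below. The triage throughput, message wording, and function name are all assumptions for illustration; they are not the actual automation scripts:

```ruby
# Sketch of a queue-dependent reply ETA: the longer the 'to be triaged'
# bucket, the later the promised human response. The throughput constant
# is hypothetical.
TRIAGED_PER_DAY = 5  # assumed triage capacity, reports per day

def triage_eta_message(queue_length)
  days = (queue_length.to_f / TRIAGED_PER_DAY).ceil
  days = 1 if days < 1  # always promise at least one business day
  "Thanks for your report! A human will read and reply within ~#{days} business day(s)."
end

puts triage_eta_message(12)  # queue of 12 at 5/day => ~3 business days
```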

    URL(s)

    Health: Attention

    External communications has started broader engagement efforts leveraging blogs and social media to increase awareness around our bug bounty program across the industry and, specifically, within the reporter community.

    Maturity: Level 3 of 3

    We have built a lot of automation to engage hackers on our program, but we need to now focus more on outreach, awareness and engagement via bug bounty-focused content development, social media engagement and incentivization of top hackers in the GitLab Bug Bounty Program.

    Next Steps

    Security Response Times

    Although the Security Department is always working towards minimizing friction to day-to-day company operations, it is never possible to be 100% frictionless when adding an additional step into an existing process. As cumbersome as this may feel (despite our best efforts), this is usually a necessary step to maturing our security posture and meeting customer requirements (especially large enterprise customers). We plan to measure Security response times to new processes such as access request approval issues and security reviews for 3rd party vendor requests to highlight where bottlenecks are to the current process, and use a data-driven approach to iterate on better processes.

    URL(s)

    Health: Unknown

    TBD

    Maturity: Level 0 of 3

    TBD

    Next Steps

    Other PI Pages

    Legends

    Maturity

    Level Meaning
    Level 3 of 3 Measurable, time series, identified target for the metric, automated data extraction, dashboard in Periscope available to the whole company (if not the whole world)
    Level 2 of 3 About two-thirds done, e.g. missing one of: automated data collection, defined threshold, or Periscope dashboard.
    Level 1 of 3 About one-third done, e.g. has one of: automated data collection, defined threshold, or Periscope dashboard.
    Level 0 of 3 We only have an idea or a plan.

    Health

    Level Meaning
    Okay The KPI is at an acceptable level compared to the threshold
    Attention This is a blip, or we’re going to watch it, or we just need to enact a proven intervention
    Problem We'll prioritize our efforts here
    Unknown Unknown

    How to work with pages like this

    Data

    The heart of pages like this is a data file called /data/performance_indicators.yml which is in YAML format. Almost everything you need to do will involve edits to this file. Here are some tips:
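Since most changes are edits to that YAML file, it is worth sanity-checking it after editing. The snippet below is a sketch: the required field names (`name`, `orgs`, `health`) are an assumption about the schema, not a verified contract:

```ruby
require "yaml"

# After editing /data/performance_indicators.yml, check that the file still
# parses and that each entry has the fields the templates expect.
# REQUIRED is an assumed schema, shown for illustration.
REQUIRED = %w[name orgs health].freeze

def check_indicators(yaml_text)
  entries = YAML.safe_load(yaml_text)
  entries.each_with_index.flat_map do |entry, i|
    REQUIRED.reject { |k| entry.key?(k) }.map { |k| "entry #{i}: missing #{k}" }
  end
end

sample = <<~YAML
  - name: Hiring Actual vs Plan
    orgs: [Engineering]
    health: Okay
  - name: Blocked Abuse Activity
    orgs: [Security]
YAML

puts check_indicators(sample)  # => ["entry 1: missing health"]
```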

    Pages

    Pages like /handbook/engineering/performance-indicators/ are rendered by an ERB template.

    These ERB templates call the helper function performance_indicators() that is defined in /helpers/custom_helpers.rb. This helper function calls in several partial templates to do its work.

    This function takes a required argument named org in string format that limits the scope of the page to a portion of the data file. Possible valid values for this org argument are listed in the orgs property of each element in the array in /data/performance_indicators.yml.
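The org-scoping behavior described above amounts to filtering the data file's entries by their `orgs` list. The real helper lives in /helpers/custom_helpers.rb; the version below is a hypothetical mirror for illustration, with made-up entry names:

```ruby
require "yaml"

# Hypothetical mirror of the performance_indicators(org) helper: keep only
# entries whose `orgs` list contains the requested org string.
def performance_indicators_for(data, org)
  data.select { |pi| Array(pi["orgs"]).include?(org) }
end

data = YAML.safe_load(<<~YAML)
  - name: Hiring Actual vs Plan
    orgs: [Engineering, Security]
  - name: MTTM for S1-S2-S3 security vulnerabilities
    orgs: [Security]
  - name: Merge request rate
    orgs: [Development]
YAML

performance_indicators_for(data, "Security").each { |pi| puts pi["name"] }
```

Passing an org string that never appears in any entry's `orgs` list simply yields an empty page section, which is why the valid values are exactly those listed in the data file.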