
Security Department Performance Indicators

Executive Summary

Each KPI below is listed with its current health and the reason(s) for that assessment.

Security Hiring Actual vs Plan: Okay
  • Monitor health closely.

Security Average Location Factor: Attention
  • Security Operations is having challenges pulling quality candidates in geo-diverse locations, but still managed to get a very promising hire in APAC.
  • Security Compliance has been skewing towards US-based candidates because GitLab's short-term compliance goals are based primarily on frameworks and requirements from this region.
  • Abuse Operations is having challenges pulling quality candidates due to salary bands.

MTTM (Mean-Time-To-Mitigation) for S1-S2-S3 security vulnerabilities: Okay
  • MTTM metrics continue to show the effectiveness of S1/S2/S3 labelling and follow-up in the GitLab project.
  • Security Automation has implemented an Escalation Engine to escalate S1/S2/S3 issues that do not have clear milestone and resolution times. This is ensuring good hygiene and appropriate prioritisation of security issues.

Blocked Abuse Activity: Attention
  • Abuse tooling is mature and uses automation to flag and block the majority of abuse activity.
  • A Machine Learning Engine is in place to automate detection of abusive accounts and activity on specific GitLab features; it currently operates at around 98.6% accuracy and is being expanded to cover more GitLab features.
  • Abuse Operations is working with Defend to ensure that existing and upcoming tooling ends up being part of the product, with no duplication of effort between the teams.
  • Reporting is available, but not yet in Periscope.

HackerOne budget vs actual (with forecast of 'unknowns'): Attention
  • H1 spend has decreased by more than 60% from Q1 to Q4, from $180,150 (Q1) to $63,000 (Q4).
  • The total for 2019 is $497,900.
  • We need to brainstorm with Finance on how we can get creative about predicting the 'unknown'. There are likely established financial models for forecasting 'unknown unknowns', but I don't have the Finance background to do this by myself.
  • Next steps need direct input from Finance; on Security's end, we have gathered all of the data we can at this point.
  • We significantly bumped HackerOne bounties for "critical" and "high" severity findings on November 18th, 2019. In the future we plan to adjust these according to revenue, using a gear ratio, on a quarterly basis.

    Key Performance Indicators

    Security Hiring Actual vs Plan

    Are we able to hire high quality workers to build our product vision in a timely manner? Hiring information comes from BambooHR where employees are in the division `Engineering`.
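
    The comparison itself is simple; a minimal Python sketch is below, using hypothetical plan and headcount numbers purely as placeholders (real figures come from BambooHR, and the plan values here are assumptions for illustration).

        from datetime import date

        # Hypothetical monthly snapshots of cumulative Security headcount versus
        # the hiring plan; real figures would come from BambooHR, not this list.
        hiring = [
            # (month, planned_headcount, actual_headcount)
            (date(2019, 10, 1), 40, 38),
            (date(2019, 11, 1), 43, 41),
            (date(2019, 12, 1), 46, 45),
        ]

        for month, planned, actual in hiring:
            gap = actual - planned
            status = "on plan" if gap >= 0 else f"{-gap} behind plan"
            print(f"{month:%Y-%m}: plan={planned} actual={actual} ({status})")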

    URL(s)

    Health: Okay

    Maturity: Level 1 of 3

    Security Average Location Factor

    We remain efficient financially if we are hiring globally, working asynchronously, and hiring great people in low-cost regions where we pay market rates. We track an average location factor by function and department so managers can make tradeoffs and hire in an expensive region when they really need specific talent unavailable elsewhere, and offset it with great people who happen to be in low cost areas.
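
    As an illustration of the calculation (not of real compensation data), a minimal Python sketch is below; the names and factors are placeholders, assuming a location factor of 1.0 for the most expensive benchmark region.

        # Placeholder location factors per team member (1.0 = most expensive
        # benchmark region); the real metric is computed per function and
        # department from compensation data.
        location_factors = {
            "security-engineer-1": 0.45,
            "security-engineer-2": 1.00,
            "security-engineer-3": 0.60,
        }

        average_location_factor = sum(location_factors.values()) / len(location_factors)
        print(f"Average location factor: {average_location_factor:.2f}")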

    URL(s)

    Health: Attention

    Maturity: Level 1 of 3

    MTTM (Mean-Time-To-Mitigation) for S1-S2-S3 security vulnerabilities

    The MTTM metric is an indicator of our efficiency in mitigating security vulnerabilities, whether they are reported through the HackerOne bug bounty program (or other external means, such as security@gitlab.com emails) or internally reported. It is the average number of days to close issues in the GitLab CE project (project_id = '13083') that have the `security` label and an S1, S2, or S3 label; this excludes issues with variations of the security label (e.g. `security review`) and S4 issues. Issues that are not yet closed are excluded from this analysis, which means historical data can change as older issues are closed and enter the analysis. The average-age-in-days threshold is set at the daily level.
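
    A minimal Python sketch of this definition is below, using a few hand-written issue records as placeholders (the real calculation runs against the project's issue data, and the field names here are assumptions for illustration).

        from datetime import datetime

        # Placeholder issue records; the real data comes from the GitLab CE
        # project's issues, not from this list.
        issues = [
            {"labels": {"security", "S1"}, "created_at": datetime(2019, 9, 1), "closed_at": datetime(2019, 9, 20)},
            {"labels": {"security", "S3"}, "created_at": datetime(2019, 8, 15), "closed_at": datetime(2019, 10, 1)},
            {"labels": {"security", "S4"}, "created_at": datetime(2019, 9, 5), "closed_at": datetime(2019, 9, 30)},    # excluded: S4
            {"labels": {"security review", "S2"}, "created_at": datetime(2019, 9, 7), "closed_at": None},              # excluded: open, label variant
        ]

        def mean_time_to_mitigation(issues):
            """Average days from creation to close for closed issues carrying the
            `security` label plus S1, S2, or S3 (S4 and still-open issues are
            excluded, matching the definition above)."""
            durations = [
                (issue["closed_at"] - issue["created_at"]).days
                for issue in issues
                if issue["closed_at"] is not None
                and "security" in issue["labels"]
                and issue["labels"] & {"S1", "S2", "S3"}
            ]
            return sum(durations) / len(durations) if durations else None

        print(f"MTTM (days): {mean_time_to_mitigation(issues)}")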

    URL(s)

    Health: Okay

    Maturity: Level 1 of 3

    Blocked Abuse Activity

    This metric focuses on the efficacy of our Abuse Handling and Abuse Operations. The Abuse Team is responsible for tracking, automating, and reporting on blocked abusive accounts on GitLab.com, and for tracking the reduction in cloud spend attributable to abuse.
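
    As a rough illustration of the two figures this metric reports (blocked accounts and the associated cloud spend avoided), a minimal Python sketch is below; the event fields and cost estimates are hypothetical, not actual Abuse Team data.

        # Placeholder blocked-abuse events; real reporting aggregates data from
        # the Abuse Team's tooling.
        blocked_events = [
            {"account_id": "a1", "detection": "ml-engine", "est_cloud_cost_usd": 12.50},
            {"account_id": "a2", "detection": "rule-based", "est_cloud_cost_usd": 3.10},
            {"account_id": "a3", "detection": "ml-engine", "est_cloud_cost_usd": 48.00},
        ]

        blocked_accounts = len({event["account_id"] for event in blocked_events})
        avoided_spend = sum(event["est_cloud_cost_usd"] for event in blocked_events)
        print(f"Blocked accounts: {blocked_accounts}, estimated cloud spend avoided: ${avoided_spend:.2f}")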

    URL(s)

    Health: Attention

    Maturity: Level 1 of 3

    HackerOne budget vs actual (with forecast of 'unknowns')

    We currently run a public bug bounty program through HackerOne, and this program has been largely successful: we get a lot of hacker engagement, and since the program went public we have been able to resolve nearly 100 reported security vulnerabilities. The bounty spend is, however, a budget forecasting concern because of its month-to-month unpredictability.
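
    The Executive Summary above mentions adjusting bounty amounts according to revenue using a gear ratio on a quarterly basis. A minimal Python sketch of that idea is below; the baseline bounty amounts, the gear ratio, and the revenue figures are all assumptions for illustration, not actual GitLab numbers or the actual forecasting model.

        # Hypothetical gear-ratio adjustment: bounty amounts grow at a fixed
        # fraction of quarterly revenue growth. All numbers are placeholders.
        BASELINE_BOUNTIES_USD = {"critical": 12000, "high": 5000}
        GEAR_RATIO = 0.5  # bounty growth as a fraction of revenue growth (assumption)

        def adjusted_bounties(prev_quarter_revenue, this_quarter_revenue):
            revenue_growth = (this_quarter_revenue - prev_quarter_revenue) / prev_quarter_revenue
            factor = 1 + GEAR_RATIO * revenue_growth
            return {severity: round(amount * factor) for severity, amount in BASELINE_BOUNTIES_USD.items()}

        print(adjusted_bounties(prev_quarter_revenue=10_000_000, this_quarter_revenue=11_000_000))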

    Health: Attention

    Maturity: Level 1 of 3

    Regular Performance Indicators

    HackerOne outreach and engagement

    The true mark of a successful public bug bounty program is partially tied to hacker outreach and engagement. We've already implemented quite a bit of security automation to engage hackers. For example, when a hacker submits a finding, our automation scripts generate a dynamic acknowledgement with an expected ETA for a reply (the ETA depends on how many findings are in our 'to be triaged' bucket at the time, so it changes). This ensures the hacker knows that 1) we received the report, 2) a human will read the report within a certain ETA, and 3) a human will reply within that ETA. Our program went public in mid-December 2018, and we are now focusing on outreach and incentive programs to retain the top hackers in our program: 90%+ of the most impactful reports are currently submitted by our top 5-10 hackers, so it is imperative that we retain those individuals.
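
    A minimal Python sketch of the kind of queue-dependent acknowledgement described above is shown below; the triage throughput and the message wording are assumptions for illustration, not the actual automation.

        import math

        # Assumed triage throughput (reports handled per business day); this is a
        # placeholder, not an actual GitLab figure.
        REPORTS_TRIAGED_PER_DAY = 5

        def acknowledgement_message(queue_length):
            """Build a first-response message whose ETA scales with the size of
            the current 'to be triaged' backlog."""
            eta_days = max(1, math.ceil(queue_length / REPORTS_TRIAGED_PER_DAY))
            return (
                "Thanks for your report! A human on the Security team will read it "
                f"and reply within approximately {eta_days} business day(s) "
                f"(there are currently {queue_length} reports awaiting triage)."
            )

        print(acknowledgement_message(queue_length=12))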

    Health: Attention

    Maturity: Level 1 of 3

    Security Response Times

    Although the Security Department always works to minimize friction in day-to-day company operations, it is never possible to be 100% frictionless when adding a step to an existing process. As cumbersome as this may feel (despite our best efforts), it is usually a necessary step towards maturing our security posture and meeting customer requirements (especially those of large enterprise customers). We plan to measure Security response times for new processes, such as access request approval issues and security reviews for 3rd-party vendor requests, to highlight where the bottlenecks in the current process are and to use a data-driven approach to iterate on better processes.
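
    A minimal Python sketch of the measurement is below, using hand-written timestamps as placeholders for access request approval issues (the real data would come from the issue tracker, and the field names are assumptions for illustration).

        from datetime import datetime

        # Placeholder access-request issues with creation and approval times.
        access_requests = [
            {"created_at": datetime(2019, 11, 1, 9, 0), "approved_at": datetime(2019, 11, 1, 17, 30)},
            {"created_at": datetime(2019, 11, 4, 14, 0), "approved_at": datetime(2019, 11, 6, 10, 0)},
        ]

        response_hours = [
            (request["approved_at"] - request["created_at"]).total_seconds() / 3600
            for request in access_requests
            if request["approved_at"] is not None
        ]
        average_response_hours = sum(response_hours) / len(response_hours)
        print(f"Average approval time: {average_response_hours:.1f} hours")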

    Health: Unknown

    Maturity: Level 1 of 3

    Other PI Pages

    Legends

    Maturity

    Level         Meaning
    Level 3 of 3  Has a description, target, and Periscope data.
    Level 2 of 3  Missing one of: description, target, or Periscope data.
    Level 1 of 3  Missing two of: description, target, or Periscope data.
    Level 0 of 3  Missing a description, a target, and Periscope data.

    Health

    Level      Meaning
    Okay       The KPI is at an acceptable level compared to the threshold.
    Attention  This is a blip, we're going to watch it, or we just need to enact a proven intervention.
    Problem    We'll prioritize our efforts here.
    Unknown    Unknown.

    How to work with pages like this

    Data

    The heart of pages like this is a data file called /data/performance_indicators.yml which is in YAML format. Almost everything you need to do will involve edits to this file. Here are some tips:

    Pages

    Pages like /handbook/engineering/performance-indicators/ are rendered by an ERB template.

    These ERB templates call the helper function performance_indicators(), which is defined in /helpers/custom_helpers.rb. This helper function calls several partial templates to do its work.

    This function takes a required string argument named org that limits the scope of the page to a portion of the data file. Valid values for this org argument are listed in the org property of each element of the array in /data/performance_indicators.yml.
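
    The rendering itself is done by the Ruby helper and ERB partials described above. Purely as an illustration of the org scoping (not the actual implementation), a small Python sketch is below; it requires PyYAML, and the example org value is hypothetical.

        import yaml  # PyYAML; illustration only, the site itself uses the Ruby helper

        def indicators_for_org(path, org):
            """Return the entries of the performance-indicator data file whose
            org property matches the requested org, mirroring the scoping that
            performance_indicators() applies when rendering a page."""
            with open(path) as data_file:
                indicators = yaml.safe_load(data_file)  # the file is a YAML array of entries
            return [entry for entry in indicators if entry.get("org") == org]

        # Example usage with a hypothetical org value.
        security_indicators = indicators_for_org("data/performance_indicators.yml", "security-department")
        print(f"{len(security_indicators)} indicators found")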