This is a Controlled Document
In line with GitLab's regulatory obligations, changes to controlled documents must be approved or merged by a code owner. All contributions are welcome and encouraged.
The purpose of the Security Operational Risk Management (“StORM”) program at GitLab is to identify, track, and treat security operational risks in support of GitLab's organization-wide objectives. The Security Risk Team utilizes the procedures below to ensure that security risks that may impact GitLab's ability to achieve its customer commitments and operational objectives are effectively identified and treated.
The scope of the StORM program is limited to operational (also referred to as Tier 2) risks as defined in the NIST SP 800-30 Rev. 1 risk management hierarchy below. These risks are generally identified during the Annual Risk Assessment (ARA) or Ad-Hoc reports.
Out of Scope Risks, such as operational risks that don't impact Security, Third Party Vendor risk, and Information System deficiencies, are managed through separate processes. However, observations noted at the Tier 3 level have the potential to escalate to a Tier 2 Risk based on a Control Health & Effectiveness Rating (CHER).
A risk governance structure has been put in place to outline the overall roles and responsibilities of individuals as it relates to StORM. The current governance structure is:
Role | Responsibility |
---|---|
Risk Owners | * Makes decisions for their specific organizations * Provides insight into the day-to-day operational procedures executed by their organization in support of Risk Treatment planning * Responsible for driving risk acceptance and/or implementing remediation activities over the risks identified |
Security Risk Team | * Coordinates and executes the annual risk assessment * Maintains the risk register and tracks risks through treatment * Acts in a Program Management capacity to support the tracking of risk treatment activities * Coordinates peer validation testing after all risk remediation activities have been completed |
Manager of Security Risk | Provides management level oversight of the StORM program, including continuing reviews of GitLab's Risk Register and acts as a point of escalation as needed |
Director of Security Assurance | Provides senior leadership level oversight of the StORM program, including a review and approval of the annual risk assessment report |
VP of Security | Executive sponsor of StORM program, performs a final review and approval of the annual risk assessment report |
Senior Leadership | * Sets the tone of the risk appetite across the organization * Drives direct reports in their respective business units to comply with the StORM program |
GitLab Team Members (Employees and Contractors) | Comply with the StORM program policies and procedures |
Security Assurance Management (Code Owners) | Responsible for approving significant changes and exceptions to this procedure |
Tone at the Top: GitLab's StORM methodology uses a defined Risk Appetite and Risk Tolerance as the primary drivers to determine which risks GitLab is willing to accept and which risks we will need to treat. These thresholds are defined by Senior Leadership across the organization to ensure the Tone at the Top is aligned with the StORM program. Risk Appetite and Tolerance are reassessed each year during the annual security operational risk assessment process. This is done through an annual Risk Appetite Survey based on the ISO 31000 Risk Management Methodology. The survey is distributed to individuals operating in a Senior Leadership capacity with direct ties to Security Operations. The responses are averaged to arrive at an overall risk appetite and tolerance.
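As a rough illustration of the averaging step described above, the sketch below averages per-respondent appetite and tolerance scores. The 1-5 scale, field names, and response values are illustrative assumptions, not the actual survey design.

```python
# Hypothetical sketch of averaging Risk Appetite Survey responses.
# The 1-5 scale and field names are assumptions for illustration only.

def average_survey(responses):
    """Average per-respondent appetite and tolerance scores."""
    appetite = sum(r["appetite"] for r in responses) / len(responses)
    tolerance = sum(r["tolerance"] for r in responses) / len(responses)
    return round(appetite, 2), round(tolerance, 2)

responses = [
    {"appetite": 2, "tolerance": 3},
    {"appetite": 3, "tolerance": 4},
    {"appetite": 2, "tolerance": 2},
]
print(average_survey(responses))  # (2.33, 3.0)
```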
In order to effectively identify, manage, and treat operational risks, GitLab has defined a set of threat source categories alongside specific risk factors and risk scoring definitions. Based on these threat sources, various stakeholders across the organization will be identified to participate in the Risk Identification phase. For details on the identified threat sources and example threat events, refer to the StORM Methodology page.
The Security Risk Team conducts security operational Risk Identification interviews with individuals operating in at least a Manager capacity/level at GitLab in order to identify security operational risks within their respective departments. Risks identified will always be framed in terms of threat sources and threat events, and then assessed against the likelihood of occurrence and the impact to GitLab if the risk event occurs. Additionally, these risks will be assessed against the current internal controls in place to determine the overall residual risk remaining.
For details of the scoring methodology used, refer to the StORM Methodology page. For guidance on drafting risk language see the Risk Drafting Guidance below. Risks will be quality reviewed by the Security Risk Manager or delegate and approval captured via comment in the GRC application.
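To make the likelihood/impact/controls framing concrete, here is a minimal sketch of inherent versus residual risk scoring in the spirit of NIST SP 800-30. The actual StORM scales and formulas are defined on the StORM Methodology page; the rating labels, numeric values, and control-strength adjustment below are assumptions.

```python
# Illustrative likelihood-x-impact scoring with a control adjustment.
# Scales and the control_strength factor are assumptions, not the
# actual StORM scoring methodology.

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
IMPACT = {"low": 1, "moderate": 2, "high": 3}

def inherent_risk(likelihood, impact):
    """Score the risk before considering internal controls."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def residual_risk(likelihood, impact, control_strength):
    """control_strength: 0.0 (no controls) to 1.0 (fully mitigating)."""
    return inherent_risk(likelihood, impact) * (1 - control_strength)

print(inherent_risk("likely", "high"))       # 9
print(residual_risk("likely", "high", 0.5))  # 4.5
```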
Risks identified through the Risk Identification phase are formally tracked via an internal risk register. Given the sensitivity of this information in aggregate, the risk register is not made public and is not distributed externally. However, a publicly viewable GitLab Risk Register Template is available here for those interested in more insight into the type of information tracked in GitLab's risk register. StORM-related risk activities are centralized within GitLab's GRC tool, ZenGRC. Additional information on the various risk-related activities carried out in ZenGRC can be found on the ZenGRC Activities handbook page.
For each risk identified above, a formal risk treatment decision is made to determine how GitLab will handle the risk. For details of the risk treatment options available, refer to the StORM Methodology page. Note that as part of the risk treatment procedures, the Risk Owner will make a determination on whether or not to accept a risk or pursue remediation based on our Risk Appetite and Tolerances. Treatment plans will be reviewed by the Security Risk Manager or delegate and approval captured via comment in the GRC application.
Once the annual security operational risk assessment is completed, an executive report and a detailed report are prepared.
There may be times when risks are identified outside of the annual StORM process, such as risks that arise from a security incident or risks identified through regular day-to-day business operations. All security operational risks identified ad-hoc are discussed with the Security Risk Team, an inherent risk score is assigned, and a quantitative analysis is performed to determine whether the risk should be escalated to the risk register.
On an annual basis, the Security Risk Team performs an analysis of security tech debt to support GitLab's ability to respond to emerging threats.
Technical debt is a pattern in which a development team does not have enough time, information, or capacity to refine and refactor their code, so their architecture, implementation, and testing may be incomplete. Tech Debt can also be used to describe IT systems and applications that are not effectively enabling the achievement of our mission and goals.
Examples of Tech Debt include systems/apps that:
Systems/apps that support and/or enable GitLab's security controls are in-scope for the purposes of the Tech Debt Analysis. A list of in-scope systems can be found here. This list is supplemented by other tools that are owned outside of Security (e.g., Okta or NIRA) that can be found in the Tech Stack.
The Security Risk Team will send each Directly Responsible Individual (DRI) of a security control-enabling system/app a separate Tech Debt Questionnaire. This brief questionnaire requests the DRI's input on topics such as:
As DRIs complete questionnaires, the Security Risk Team will review responses to assess whether the system/application represents a risk to GitLab. The information collected will also help to support decision-making from a budget/investment perspective.
TBD
To assess newly acquired/developed systems that enable security controls OR are/may be in scope for compliance programs for potential inclusion into our GitLab Control Framework (GCF) and compliance programs (e.g., Security Compliance Program and SOX Program).
Our goal is to identify systems that enable security controls (e.g., an access management system) OR systems that are (or may be) subject to regulatory (e.g., SOX) or compliance requirements (e.g., SOC 2) as early as possible via our Third Party Risk Management (TPRM) Program. As we engage with third parties for new systems, we assess the use of the system and whether or not it meets the criteria described above. Existing systems can also be ingested into the Security Compliance Intake process. Examples include systems whose functionality has expanded to support security controls, or instances where our understanding of a security control has improved, resulting in the identification of a previously uncredited supporting system.
If the system meets the criteria, we open up a new Security Compliance Intake Issue.
The Security Compliance Intake issue asks the author to include details about the system, including:
Once the Security Compliance Intake issue is populated, Security Risk assigns the issue to the Security Compliance team to complete the following tasks to incorporate the system into our Security Compliance Program:
There are multiple ways the team can be engaged for risk:

* The #security-risk-management Slack channel
* The Risk Escalation workflow, triggered by clicking on the blue lightning bolt in the bottom right corner of the Slack message box and selecting Risk Escalation
* Applying the risk::escalation label. The Security Risk Team will monitor and triage issues or MRs that have this label applied.
* Mentioning gitlab-com/gl-security/security-assurance/risk-field-security-team on the issue or MR

StORM Program considerations include both risks (what might happen) and observations (what has happened/non-compliance). For guidance on writing observations, please refer to the Observation Management Procedure handbook page.
When drafting a risk, start with a risk statement. This will represent the title of the Risk in our GRC system and is an attempt to condense the risk into a single sentence. In the spirit of low-context communication, avoid using single words or short phrases for the risk statement (e.g., Supply Chain). As we largely deal with negative risks (vs. positive risks/opportunities), starting the statement with negative language like "Failure to", "Inadequate", "Incomplete", "Lack of", etc. is appropriate, but not required. As risks represent what might happen, use "may" before describing the negative effect it may have on the confidentiality, integrity, availability, security, and privacy of GitLab data. Example: Inadequate physical security controls may result in the loss of GitLab/Customer data and physical assets. The risk description should contain details related to the assets/resources at risk, the event that may occur, the source that would trigger the event (root cause), and the consequence (impact/loss).
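The drafting structure above (a one-sentence statement plus a description covering asset, event, root cause, and consequence) can be sketched as a small helper. The field names and example values are hypothetical illustrations, not a GRC-tool schema.

```python
# Hypothetical helper illustrating the risk-drafting structure:
# a one-sentence statement plus a description covering the asset,
# event, root cause, and consequence. Field names are illustrative.

def draft_risk(statement, asset, event, source, consequence):
    description = (
        f"Asset at risk: {asset}. Potential event: {event}. "
        f"Root cause: {source}. Consequence: {consequence}."
    )
    return {"statement": statement, "description": description}

risk = draft_risk(
    statement=(
        "Inadequate physical security controls may result in the loss "
        "of GitLab/Customer data and physical assets."
    ),
    asset="GitLab/Customer data and physical assets",
    event="unauthorized facility access",
    source="insufficient physical access controls",
    consequence="loss or theft of data and equipment",
)
print(risk["statement"])
```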
As per GitLab's Communication Page, information about risks tracked in GitLab's Risk Register defaults to not public and limited access. Given the nature of risk management, GitLab will always be susceptible to risks. The goal of implementing risk treatment plans and carrying out risk remediation activities is to reduce the likelihood or impact (or both) of a risk occurring. Given that no risks identified can ever be fully eliminated, but instead are mitigated through reduction of likelihood and/or impact, risks that have been escalated to GitLab's Risk Register will be shared on a need-to-know basis.
The only exceptions to this procedure are those risks that are out of scope (as defined above).