GitLab's internal red team extends the objectives of penetration testing by examining the security posture of the organization and its ability to implement effective cyber defenses.
Penetration testing is a specialized type of assessment conducted on information systems or individual system components to identify vulnerabilities that could be exploited by adversaries. Such testing can be used to either validate vulnerabilities or determine the degree of resistance organizational information systems have to adversaries within a set of specified constraints (e.g., time, resources, and/or skills).
Red team exercises provide more comprehensive assessments that reflect real-world conditions than penetration testing. The exercises can further be used to improve security awareness and training and to assess levels of security control effectiveness. GitLab utilizes NIST 800-53 Revision 4 security control CA-8 to define the Red Team and their mission. The control can be found on NIST.gov.
The Red Team operates under a pre-defined set of rules of engagement. The rules of engagement exist to inform GitLab's team members on how the team operates during engagements. It provides guidelines for determining scope, the ethics we employ during our engagements, how we collaborate as a security team, and how we escalate vulnerabilities and exploits we discover during those engagements.
Further details can be found in the job family description.
Most Red Team operations are planned and approved before any actions are conducted (see Red Team Open Scope for exceptions). Each operation consists of the following steps:
Operations should have a clear goal, generally in alignment with one of the following:
Any GitLab team member can participate in the proposal or discussion of an upcoming operation. To do so, open an issue in Red Team Operations and choose the operation proposal issue template. The template will guide you through the process of planning an operation. Make sure that labels are assigned to the issue for organization and searchability, including:

`Red Team Operation::Proposal`
Once the issue above has been hashed out, and the team agrees that it should be conducted as an official operation, the proposal should be promoted to an epic in our Red Team group. This gives us increased functionality, like the ability to add additional issues for threat models, final reports, recommendations, etc. Not all proposals will be promoted to an epic - some may remain as individual issues while still being worked on outside the scope of a full-scale operation.
The threat model should be the foundation that the operation is built upon. Red Team operations typically focus on accessing a specific classification of data by exploiting complex interconnected systems, making an asset-based threat model most appropriate. However, if an operation is focused on an individual application, it might make sense to leverage the PASTA framework as outlined here.
A new issue should be created from the operation's epic using the template called `red_team_threat_model`, landing in the Red Team Operations project. The template will provide detailed instructions on what to include.
Ongoing operations are documented in a GitLab-managed instance of Vectr. This tool allows us to track, at a very granular level, the specific Tactics, Techniques, and Procedures (TTPs) used in an operation. Each TTP (known as a "Test Case" inside Vectr) can be analyzed to determine what was detected, blocked, and/or logged after the operation is complete.
New operations can be documented as follows:
Attempting to plan out test cases in Vectr ahead of time helps us understand the offensive strategies that will be used. This, in turn, helps us understand the specific security controls we expect to encounter in terms of detection, response, and logging. It also encourages a broader usage of techniques from the MITRE ATT&CK framework, which we use to ensure our security team has a broad exposure to realistic attack scenarios.
Logging every action in a single campaign inside Vectr may produce a complex attack path, particularly for a chain of events that cycles through a traditional killchain multiple times. That's ok. It's best to get everything logged as you go. Once an operation is complete, it may make sense to use the "clone" functionality to split some actions out into separate, logical attack paths.
We use Mitre Caldera to build automated adversaries capable of conducting specific attack abilities. Automating as much as possible allows us to repeat attacks on demand while testing and improving our detection and response capabilities.
When automation doesn't make sense, Caldera may still be useful as a traditional Command and Control (C2) platform with interactive reverse-shell capabilities.
We host a clone of Caldera's Stockpile submodule in a private project that we will periodically update to the current upstream release. For each new operation, we will create a new branch of the project and perform the following actions:
`/data/adversaries` that includes all the abilities we plan to use and develop
This then allows us to easily spin up new Caldera instances as needed, simply updating the Stockpile submodule to our own project. An example is shown below:
```shell
# Clone the upstream Caldera installation
git clone https://github.com/mitre/caldera --branch 3.1.0

# Update it to use our private Stockpile project
cd caldera
git config --file=.gitmodules submodule.plugins/stockpile.url "email@example.com:[PATH-TO-PRIVATE-STOCKPILE]"
git config --file=.gitmodules submodule.plugins/stockpile.branch "[BRANCH-FOR-OPERATION]"

# Download the required submodules
git submodule update --init --recursive --remote

# Install python requirements
pip3 install -r requirements.txt

# Start server (configure or use existing conf/local.yml first)
python3 server.py
```
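The rewritten submodule settings can be confirmed the same way Git reads them. This is a sketch using a throwaway `.gitmodules` with a hypothetical private URL; in practice, run the `--get` commands inside the caldera checkout after the rewrite above:

```shell
# Create a throwaway .gitmodules to illustrate (the URL is hypothetical);
# in a real operation these keys are already set by the commands above.
tmp=$(mktemp -d)
cd "$tmp"
git config --file=.gitmodules submodule.plugins/stockpile.url "git@example.com:redteam/stockpile.git"
git config --file=.gitmodules submodule.plugins/stockpile.branch "[BRANCH-FOR-OPERATION]"

# Read the values back the same way `git submodule update` will
git config --file=.gitmodules --get submodule.plugins/stockpile.url
git config --file=.gitmodules --get submodule.plugins/stockpile.branch
```

If the `--get` commands print the private project URL and the operation branch, the subsequent `git submodule update --init --recursive --remote` will pull abilities from the private Stockpile rather than upstream.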
Once the Red Team has completed the planned offensive operations, the assigned members of the Blue Team will work through each test case in Vectr to fill in the following:
If a test case is marked as "Not Detected", it needs to be considered for remediation - especially if there was a specific "Expected detection layer" marked inside Vectr. Some test cases may also expose a specific vulnerability in a product or configuration - these should also result in an issue being opened.
These issues will generally be opened in the Security Operations issue tracker and labeled with the following:
Some mitigating issues will be created by others in arbitrary project locations. It's important to note that an issue can only be linked to an epic if its project belongs to the epic's group hierarchy. Therefore, it's best to link any mitigating issues to the report issue rather than the epic for consistency.
Vectr provides a wide range of reports to provide insight on detection and response capabilities. These can be exported and shared on request.
Creating a report is done by creating an issue from the operation's epic using the template called `red_team_report`, landing in the Red Team Operations project. This issue should have the label `Red Team Operation::Report` assigned for organization and searchability. The template contains the following:
Most of these reports will be classified YELLOW, meaning they will not be made publicly available. The Red Team does support GitLab's core value of transparency but must ensure it does not introduce risk to GitLab, GitLab customers, or GitLab business partners. When possible, we will share techniques and tooling via our Tech Notes.
It's important to note that newly created epics are public by default. Care should be taken to mark new epics as confidential if they or any of their child epics or issues expose YELLOW data. If an epic is marked confidential, all of its children must be marked confidential as well.
The Red Team follows GitLab engineering's recommendation to perform regular retrospectives. Depending on need, these may be performed on a regular cadence or, at minimum, just prior to completing an individual operation. The objective is to improve the performance of the team by taking an honest look at what went well and what went poorly, and by identifying specific, actionable takeaways to iteratively improve our technical and collaborative skills.
The steps are all outlined in the link above. A new issue should be opened on the operation's project to ensure the retrospective is completed.
There are multiple reasons to iterate on past, completed operations: the GitLab environment changes, new attack techniques are discovered, or the Blue Team improves its detection capabilities and wants to re-assess past scenarios and operations.
In those situations, the Red Team can simply clone either a full assessment or an individual campaign inside Vectr. This way, it is possible to iterate over past operations, make appropriate changes to the new versions and continue the cycle described above. Operations may therefore have multiple versions, and Vectr reporting will automatically gather the metrics to show progress over time.
The Red Team will develop new adversary emulation techniques on a regular basis, both during official operations as well as informal open-scope activities.
When a technique has been proven effective, the Red Team will configure any existing automation around this technique to publish messages using Google Cloud Pub/Sub. These messages can then be ingested by the SIEM to generate alerts and integrate into the standard process of responding to known risks.
For example, the Red Team may create a bot that logs into development instances and attempts to exploit a specific configuration. Once the risk has been proven and existing detection/response capabilities have been tested, it is time for the technique to be fully disclosed internally.
While this may result in product fixes or infrastructure changes, it is possible that vulnerable configurations may reappear in the environment. The bot can continue to run at scheduled intervals, but will be enhanced to publish a message to Google Cloud that will have a corresponding SIEM alert. At this point, SIRT will respond to new occurrences and the Red Team will no longer attempt exploitation.
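As a minimal sketch of that publishing step (the topic name, field names, and JSON payload here are hypothetical illustrations, not the actual SIEM contract), the bot might emit something like:

```shell
# Build the alert payload; field names and values are hypothetical
technique="T1190"        # ATT&CK technique the bot attempted
host="dev-instance-01"   # asset where exploitation was attempted
msg=$(printf '{"technique":"%s","host":"%s","result":"exploit-succeeded"}' "$technique" "$host")
echo "$msg"

# In the scheduled bot, the payload would be published with something like:
#   gcloud pubsub topics publish red-team-detections --message "$msg"
```

The SIEM subscribes to the topic and raises a corresponding alert, putting new occurrences into SIRT's standard response workflow without further Red Team involvement.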
Some activities are considered open-scope, meaning that they can be conducted at any time, from any source IP address, and against any GitLab-managed asset without prior approval or notification. The output may or may not be included in the reporting for planned operations, depending on the results and whether or not it is helpful to the Blue Team.
If these activities are detected by SecOps, they should be treated as potentially malicious and acted upon appropriately. Unless part of a planned operation, there should never be an assumption that suspicious behavior is a Red Team activity.
Conducting open-source intelligence (OSINT) gathering against non-GitLab managed assets, such as social media sites, is also considered open-scope and may be conducted outside of planned operations.
If an open-scope activity uncovers a vulnerability that puts GitLab at immediate risk of compromise, SecOps will be notified via the official paging procedures.
The goal of a red team operation is often to test our policies and procedures when reacting to an actual threat. This includes identifying suspicious activity and following the appropriate runbook to investigate and respond to that threat.
If any team member, at any time, could simply ask "Hey, this looks suspicious. Is this our red team?" then this opportunity would be lost. Instead, all suspicious activity should be treated as potentially malicious and acted upon accordingly.
Any unannounced red team operation will include "trusted agents" placed strategically across relevant teams. These agents can help ensure the operation provides value by allowing incident response to continue without going too far. For example, we would not want an emulated attack to affect production operations or escalate to third parties.
If suspicious activity is detected and the matter is escalated to a trusted agent, they may know right away whether or not the activity is related to a red team operation. If they are unsure, they can request deconfliction. At this point, the red team will cease all activity until they can answer definitively whether or not they were the source of activity.
If the activity was indeed the red team, they will provide proof and the operation will generally continue. Specific rules for if/when an operation is revealed to all involved will be documented in the original project proposal. This may include provisions for stopping incident response but continuing the red team work to further test technical controls.
If the red team is ever asked "Is this you?" by someone other than a trusted agent, they will respond with the following text:
Thanks for your vigilance! Any suspicious activity should be treated as potentially malicious. If you'd like to contact security, you can follow the process here: https://about.gitlab.com/handbook/security/#contact-gitlab-security.
Red team operations provide an opportunity to practice these processes, and revealing an operation early might mean we miss out on that opportunity. Because of this, we have a policy to neither confirm nor deny whether an activity belongs to us. You can read more about this policy here: https://about.gitlab.com/handbook/engineering/security/security-operations/red-team/#red-team-deconfliction-process.
Every two weeks, the Red Team will host Red Team Office Hours. This meeting is open to the entire company and alternates between EMEA- and APAC-friendly times. For the most part, these will be open discussions with members of the Red Team, but we will also use this time to perform "read outs" of recently completed Red Team Operations. Note that in some cases, depending on the content, these will not be recorded or made public.